Search Results for: Banking

Fire Sale: Finance and Fascism in the Amazon Rainforest

Brasil2/E+/Getty

In a recent piece for Jacobin, climate writers Alyssa Battistoni and Thea Riofrancos drew a connection between fires burning in Greenland and those still ablaze in the Amazon rainforest: “They’re being sparked by the rich and powerful, whether by agricultural conglomerates, complicit right-wing governments, or fossil fuel executives who’ve lied to the public so they can keep spewing heat-trapping carbon up into the atmosphere for a quick buck.” The simplicity of the claim was dumbfounding and, for that very reason, haunting. Was it merely the rich and powerful who lit the match?

Another writer for the magazine, Kate Aronoff, called for fossil fuel executives to be tried for crimes against humanity. “Technically speaking, what fossil-fuel companies do isn’t genocide,” she wrote, clarifying that energy CEOs don’t target their victims based on racial or ethnic animus. Yet genocidal land grabs are being carried out to expand “the Red Zone” — the agricultural frontier — inching its way deeper into the Amazon rainforest by way of roads and infrastructure backed by global capital. The Amazon, often called the lungs of the earth, is being seized from indigenous communities by mining and agribusiness interests, gutting the resiliency of one of the earth’s last great carbon sinks and producers of oxygen. But who is responsible for burning it? Bolsonaro? Corruption in Brazil? The World Bank? U.S. financial firms? Silicon Valley? Could the culprits be named, I wondered? Tried? Read more…

In the Age of the Psychonauts

Frank R. Paul, 1924. Forrest J. Ackerman Collection / CORBIS / Corbis via Getty Images.

Erik Davis | An excerpt adapted from High Weirdness: Drugs, Esoterica, and Visionary Experience in the Seventies | The MIT Press | 2019 | 35 minutes (9,207 words)

Early in Thus Spoke Zarathustra, Nietzsche’s prophet of the future discovers a tightrope walker preparing to perform in front of a crowd. It is here, crucially, that Zarathustra announces his famous doctrine of the übermensch, the overman, the superhero of the spirit. Humanity, he says, is merely a rope “fastened between animal and Overman,” a rope that passes over the abyss.

Elsewhere Nietzsche describes the spiritual acrobats who can rise to the call of the Overman as “philosophers of the future.” Nondogmatic, often solitary, with a predilection for risky behavior, these radical free thinkers are “curious to a fault, researchers to the point of cruelty, with unmindful fingers for the incomprehensible.” Nietzsche simply calls them those who attempt. Their truths are their own, rather than general facts, and they are “at home in many countries of the spirit, at least as guests.”

Sounds to me like Nietzsche is talking about psychonauts. After all, while we are used to comparing drug visionaries to mystical seekers, from another angle, they more resemble philosophers or mad scientists compelled, beyond reason but with some sense, to put themselves on the line, risking both paranoia and pathology through their anthropotechnics. Read more…

The No. 1 Ladies’ Defrauding Agency

Illustration by Matt Chinworth

Rose Eveleth | Longreads | July 2019 | 12 minutes (2,883 words)

Sarah Howe’s early life is mostly a mystery. There are no surviving photographs or sketches of her, so it’s impossible to know what she looked like. She may, at one point, have been married, but by 1877 she was single and working as a fortune-teller in Boston. It was a time of boom and invention in the United States. The country was rebuilding after the Civil War, industrial development was starting to take off, and immigration and urbanization were both increasing steadily. Money was flowing freely (to white people anyway), and men and women alike were putting that money into the nation’s burgeoning banks. In 1876, Alexander Graham Bell invented the telephone, and in 1879 Thomas Edison created the lightbulb. In between those innovations, Sarah Howe opened the Ladies’ Deposit Company, a bank run by women, for women. 

The company’s mission was simple: help white women gain access to the booming world of banking. The bank only accepted deposits from so-called “unprotected females,” women who did not have a husband or guardian handling their money. These women were largely overlooked by banks, which saw them — and their smaller pots of money — as a waste of time. In return for their investment, Howe promised incredible results: an 8 percent monthly interest rate. Deposit $100 now, and she promised an additional $96 back by the end of the year. And to sweeten the deal, new depositors got their first three months’ interest in advance. When skeptics expressed doubts that Howe could really guarantee such high returns, she offered an explanation: The Ladies’ Deposit Company was no ordinary bank, but instead was a charity for women, bankrolled by Quaker philanthropists.
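A note on the arithmetic: the advertised $96 only adds up if the 8 percent was paid monthly, since 8 percent of $100 is $8, and twelve such payments total $96. A minimal sketch of the promised (and never sustainable) payout:

```python
# Howe's advertised terms, as described above: 8% interest per month,
# paid as simple (non-compounding) interest on the original deposit.
deposit = 100               # dollars
monthly_rate_pct = 8        # 8% per month

monthly_interest = deposit * monthly_rate_pct // 100  # $8 a month
annual_interest = 12 * monthly_interest               # $96 over a year

print(monthly_interest, annual_interest)  # 8 96
```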

Word of the bank spread quickly among single women — housekeepers, schoolteachers, widows. Howe, often dressed in the finest clothes, enticed ladies to join, and encouraged them to spread the news among their friends and family. This word-of-mouth marketing strategy worked: Howe’s bank gathered investments from across the country in a time before easy long-distance communication. Money came in from Buffalo, Chicago, Baltimore, Pittsburgh, and Washington, all without Howe taking out a single newspaper advertisement. She opened a branch of the bank in New Bedford, Massachusetts, and had plans to add offices in Philadelphia and New York to keep up with the demand. Many of the women who deposited with the Ladies’ Deposit Company reinvested their profits back in the bank, putting their faith, and entire life savings, in Howe’s enterprise. All told, the Ladies’ Deposit Company would gather at least $250,000 from 800 women — although historians think far more women were involved. Some estimate that Howe collected more like $500,000, the equivalent of about $13 million today.

It didn’t take long for the press to notice a woman encroaching on a man’s space. And not just any woman, a single woman who had once been a fortune-teller! “Who can believe for a moment that this woman, who a few years ago was picking up a living by clairvoyance and fortune-telling, is now the almoner of one of the greatest charities in the country?” asked the Boston Daily Advertiser. Reporters were particularly put off by their inability to access even the lobby of Howe’s bank, turned away at the door for being men. One particularly intrepid reporter, determined to find out what Howe’s secret was, returned dressed as a woman to gain entry and more information. 



Then, in 1880, it all came crashing down. On September 25, 1880, the Boston Daily Advertiser began a series of stories that exposed Howe’s bank as a fraud. Her 8 percent returns were too good to be true. Howe was operating what we now know as a Ponzi scheme — 40 years before Ponzi would try his hand at it. 

Here’s how it worked: When a new depositor arrived, Howe would use their money to pay out older clients, so the whole scheme required a constant influx of new depositors to keep the old ones paid. Like every other Ponzi scheme, Howe’s bank would eventually have run out of new money. The run of stories in the Boston Daily Advertiser instilled enough fear in the bank’s investors that they began to withdraw their money, and eventually there was a run on Howe’s bank.
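The mechanism described above can be sketched as a toy simulation. All figures here are hypothetical, not Howe’s actual books; the `skim` parameter stands in for whatever the operator kept for herself. The scheme survives only while recruitment grows, and collapses once new money stops:

```python
# Toy Ponzi-scheme model (hypothetical figures, not Howe's actual books).
# Each month, cash from new depositors funds the interest owed to everyone
# who joined earlier; the operator keeps a cut ("skim"). The scheme fails
# the first month it cannot cover the promised payouts.

def collapse_month(new_depositors, deposit=100, monthly_rate=0.08, skim=0.5):
    """Return the 1-based month the scheme collapses, or None if it survives."""
    cash = 0.0
    investors = 0
    for month, joiners in enumerate(new_depositors, start=1):
        cash += joiners * deposit * (1 - skim)     # fresh money in, minus the skim
        owed = investors * deposit * monthly_rate  # interest due to earlier investors
        if owed > cash:
            return month                           # can't pay everyone: collapse
        cash -= owed
        investors += joiners
    return None

# While recruitment doubles each month, the scheme stays afloat...
print(collapse_month([10, 20, 40, 80]))                     # None
# ...but once new depositors stop arriving, the cash drains away.
print(collapse_month([10, 20, 40, 80, 0, 0, 0, 0, 0, 0]))  # 10
```

The design mirrors the paragraph above: nothing is ever invested, so the only inflow is new deposits, which is why a mere pause in recruitment — or a panic-driven run — is fatal.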

Sarah Howe was the most unfathomable and outrageous character: a woman villain.

It took two weeks and five days from the Advertiser’s first story exposing Howe’s fraud for her to be arrested. The press extended her victims a modicum of sympathy, describing their plights while also reminding the reader that they deserved their pain for trusting a woman with their money. “I put every dollar I had into the bank, and if I lose it I am a beggar,” one depositor told the Boston Globe at the time. “I wanted the interest so badly, that I placed a mortgage on my furniture to secure the principal to deposit. Oh! I wish I hadn’t now, for I shall have my goods sold from under my head,” said another.

Howe, on the other hand, was shown no such sympathy. The Boston Herald claimed that Howe was “nearly as deaf as a post” and cross-eyed. Banker’s Magazine described Howe as “short, fat, very ugly, and so illiterate as to be unable to write an English sentence, or to speak without making shameful blunders.” This was untrue: Howe’s own statements to the press before her downfall suggest that, in fact, she had a sharp wit. In response to one newspaper’s critique of the Ladies’ Deposit Bank, Howe wrote: “The fact is, my dear man, you really know nothing of the basis, means or methods on which our affairs are conducted, and when shut up in the meshes of your savings-bank notions, you attempt an exposition of the impossibility of our existence, you boggle and flounder about like a bat in a fly trap.”

 Nevertheless, as soon as she was caught, a backstory for Howe emerged in the papers. The Boston Herald published a story with the headline “Mrs. Howe’s Unsavory Record,” claiming she was born out of wedlock and ran away at 15 to marry an “Indian physician,” who they also referred to as “her dark-skinned Othello.” The paper claimed the marriage caused her mother such distress that she wound up dying in an asylum “raving over the heartlessness of her daughter.” The story also alleged that she then left her first husband, married two house painters in quick succession, had been in and out of prison, and even tried to lure a young girl into prostitution. Basically none of this can be confirmed by historians, but it didn’t matter. Sarah Howe was the most unfathomable and outrageous character: a woman villain. As historian George Robb writes in his paper about Sarah Howe, “She had to be ugly, vulgar and immoral.” The only way her story could make sense to readers was if Howe was some kind of abomination — a complete outlier both physically and mentally.  

 “I’m sure she was just a normal-looking person,” Robb told me. “Until the whole thing unraveled, when people talked about her, no one described her as anything other than an ordinary person.” But in Victorian-era Boston, the idea that a woman criminal could be an “ordinary person” was impossible. “People were comfortable with the idea of women as victims,” Robb told me. “The men were the crooks, the men were doing the manipulation. The women were the victims. They needed to be protected by other men.” 

Howe wound up standing trial in Boston, and was ultimately convicted, not of fraud but of soliciting money under false pretenses, for claiming that a Quaker charity was backing the venture. She spent three years in prison, and when she got out, in classic scammer fashion, she tried the whole thing again.

“I think there’s a similarity between being a fortune-teller and making money on the stock market, making predictions about the future”

Next, Howe opened up a new Woman’s Bank on West Concord Street in Boston. She kept the scheme going from 1884 to 1886, offering depositors 7 percent interest and gathering at least $50,000, although historians think the number might be far higher. This time, however, Howe was never prosecuted. After being caught and closing down her bank, she gave up the game and returned to fortune-telling and doing astrology readings for 25 cents each. She died in 1892, at the age of 65, no longer wealthy, but still notorious enough to warrant an obituary in the New York Times that read: “For three months she had been living in a boarding and lodging house, carefully keeping from those whom she met the knowledge that she was the notorious Mrs. Howe of Woman’s Bank memory.” 

***

Sarah Howe was, in some ways, a product of her time. In the late 1800s, the United States was moving out of a period marked by “free banks,” in which there were very limited rules governing banks, and into a system of national banking more familiar to us today. Money was flowing into the economy, and financial advisers were telling their clients to put their cash in banks that were now more stable than they had been in the past. This advice was often targeted at women, who couldn’t use their money to, say, start their own endeavors. But they could put their money in stocks and banks, and many of them did. In fact, during that time, women were often the majority of depositors and shareholders.

But there were very few regulations on banks. The stock market was relatively new. For women like Howe, it presented an unregulated place where money was changing hands purely on the basis of confidence. And as a fortune-teller, Howe had plenty. “I think there’s a similarity between being a fortune-teller and making money on the stock market, making predictions about the future, and getting people to believe that you know something about how the trends are going to play,” Robb said. 

At the time there was little fear when it came to watchdogs or regulators. Howe could start her own bank with no real procedure or oversight. “Anybody could form a bank!” Robb said, “If you could get people to give you money you could call it a bank. You advertise, you rent a fancy office space, people come and give you money. It was amazing how much money you could make before anybody caught you.” As much as people love to point fingers at Howe, very rarely do people consider the complete lack of oversight that allowed her to prey upon these women. “It’s so much easier to pick individual villains and say, ‘Oh it’s these nasty scheming people who are the problem, the capitalist system can do no wrong, it’s perfect and self-regulating and we don’t want to mess with that. It’s these individual crooks that are the problem.’” 

***

In spite of her crimes, Sarah Howe is not a household name. It’s not called a Howe scheme after all, it’s a Ponzi scheme. When Howe is mentioned at all, it’s as a punchline. She’s forever stuck as a historical fun fact. “She’s become an anecdote in history, but she should be as famous or more famous than Ponzi,” historian Robyn Hulsart told me. “There’s nothing about what she did that doesn’t fit the definition of a Ponzi scheme.” (In fact, Howe wasn’t even the first to execute this type of scam. At least two other women pulled off Ponzi schemes before her — one in Berlin, the other in Madrid.) 

It’s become popular now to say that we’re living through the golden era of the scammer. “We’re living in a scammer’s paradise,” Sarah Jeong told Willamette Week recently about our current era, “not just economic scams, but intellectual scams, too.” Elizabeth Holmes, Anna Delvey, Fyre Fest, Ailey O’Toole, Jennifer Lee, Anna March — the list is long enough that everybody from WIRED to The Cut called 2018 “the year of the scam.” As the United States recovers from the fraud of the housing market bubble, we’re in another era of deregulation. President Donald Trump and the Republican-run Senate have gone on what has been called a “deregulation spree,” increasing the cap at which banks become subject to more stringent rules from $50 billion in assets to $250 billion. Robb pointed out that we never seem to actually learn. “Whenever there’s a big boom cycle in the economy everybody screams to deregulate,” he told me, and with deregulation comes increased risk for frauds like Howe’s.

Howe’s case also demonstrates a struggle in feminist circles that persists today: How do you balance the desire to celebrate women with the need to hold bad behavior accountable?

Howe’s legacy is one we could and should learn from today in the so-called era of the scam. Her success tells us something not just about fraud, but about economics and the conditions under which fraud can blossom into a $17 million scam. Howe was aided and abetted by the economic conditions, but she was also a wizard at her craft. What Howe mastered, beyond the Ponzi scheme, is what experts call an “affinity fraud” — going after a group of people who have something in common, and most often with whom the scammer has something in common too. As an “unprotected” woman herself, Howe understood what might appeal to her clientele. She decorated the bank to create a mood and aesthetic that would appeal to her ideal mark. The Advertiser described the Ladies’ Deposit Bank this way: “The furniture, of which there are many pieces, is upholstered in raw silk of old gold figured patterns, and corresponds in tone and design with the walls. … The carpets are of a deep warm tone, and all the ornaments are rich and in good taste.” She used language that drew women in, talking about her commitment to the “overworked, ill-paid sisterhood.” Hulsart points out that it’s not unlike the language used by multilevel marketing companies like Mary Kay and Amway, which generally advertise to women through word of mouth. “They really like to say things like ‘we’re in this together,’” Hulsart says. Read more…

The Rise and Fall of the Bank Robbery Capital of the World

Longreads Pick

In 1992, there were 2,641 bank robberies in Los Angeles — “one every 45 minutes of each banking day.” How did L.A. become the epicenter of the heist? Thanks to the dangerous combination of cars, convenience, and cocaine.

Source: CrimeReads
Published: Jun 11, 2019
Length: 20 minutes (5,164 words)

Talk Like an Egyptian

Illustration by Homestead

Cary Barbor | Longreads | June 2019 | 14 minutes (3,384 words)

My new husband Mike reached into the suitcase open on the bed. He picked up my olive green cotton jacket between his thumb and forefinger. Worn and soft from many washings, it was a favorite. I liked its Mao collar and faux-wood buttons.

“You can’t wear that with these people,” he said.

Mike learned English as a teenager and sometimes uses odd, distancing phrases like “these people” to talk about people very close to him. The people closest to him.

“What people?”

“My mom; my stepfather. They are formal,” he explained, placing the jacket on the bed. I would need the proper clothes to fit in.

Seurat’s “A Sunday Afternoon on the Island of La Grande Jatte” flashed in my mind. Men with top hats and women with parasols. Formal like that? I didn’t have clothes for that. I had met his parents briefly at their apartment in Cannes, in the south of France. I thought I had passed muster. But now I wasn’t sure. And now I was packing for a long stay with them in Cairo, their real home, where I would be even more of an alien.

I grew up in a suburb of Philadelphia, and not one of the fancy ones. My father was a chemical engineer for an oil company and my mom, a homemaker and then a secretary. My two older brothers, my older sister, and I went to public school and Catholic church every Sunday. We were certainly never hungry. But there was always a whiff of “not enough” in the house. If we wanted new shoes, we had to show our mother the old ones with actual holes in them. I realized later that was more about her childhood home, with a mentally ill and unemployable father, than the financial status of ours. Still, that feeling hung in the air, getting into the fabric like smoke.
Read more…

How the Guardian Went Digital

Newscast Limited via AP Images

Alan Rusbridger | Breaking News | Farrar, Straus and Giroux | November 2018 | 31 minutes (6,239 words)

 

In 1993 some journalists began to be dimly aware of something clunkily referred to as “the information superhighway” but few had ever had reason to see it in action. At the start of 1995 only 491 newspapers were online worldwide: by June 1997 that had grown to some 3,600.

In the basement of the Guardian was a small team created by editor in chief Peter Preston — the Product Development Unit, or PDU. The inhabitants were young and enthusiastic. None of them were conventional journalists: I think the label might be “creatives.” Their job was to think of new things that would never occur to the largely middle-aged reporters and editors three floors up.

The team — eventually rebranding itself as the New Media Lab — started casting around for the next big thing. They decided it was the internet. The creatives had a PC actually capable of accessing the world wide web. They moved in hipper circles. And they started importing copies of a new magazine, Wired — the so-called Rolling Stone of technology — which had started publishing in San Francisco in 1993, along with the HotWired website. “Wired described the revolution,” it boasted. “HotWired was the revolution.” It was launched in the same month the Netscape team was beginning to assemble. Only 18 months later Netscape was worth billions of dollars. Things were moving that fast.

In time, the team in PDU made friends with three of the people associated with Wired. They were the founders, Louis Rossetto, and Jane Metcalfe; and the columnist Nicholas Negroponte, who was based at the Massachusetts Institute of Technology and who wrote mindblowing columns predicting such preposterous things as wristwatches which would “migrate from a mere timepiece today to a mobile command-and-control center tomorrow . . . an all-in-one, wrist-mounted TV, computer, and telephone.”

As if.

Both Rossetto and Negroponte were, in their different ways, prophets. Rossetto was a hot booking for TV talk shows, where he would explain to baffled hosts what the information superhighway meant. He’d tell them how smart the internet was, and how ethical. Sure, it was a “dissonance amplifier.” But it was also a “driver of the discussion” towards the real. You couldn’t mask the truth in this new world, because someone out there would weigh in with equal force. Mass media was one-way communication. The guy with the antenna could broadcast to billions, with no feedback loop. He could dominate. But on the internet every voice was going to be equal to every other voice.

“Everything you know is wrong,” he liked to say. “If you have a preconceived idea of how the world works, you’d better reconsider it.”

Negroponte, 50-something, East Coast gravitas to Rossetto’s Californian drawl, was working on a book, Being Digital, and was equally passionate in his evangelism. His mantra was to explain the difference between atoms — which make up the physical artifacts of the past — and bits, which travel at the speed of light and would be the future. “We are so unprepared for the world of bits . . . We’re going to be forced to think differently about everything.”

I bought the drinks and listened.

Over dinner in a North London restaurant, Negroponte started with convergence — the melting of all boundaries between TV, newspapers, magazines, and the internet into a single media experience — and moved on to the death of copyright, possibly the nation state itself. There would be virtual reality, speech recognition, personal computers with inbuilt cameras, personalized news. The entire economic model of information was about to fall apart. The audience would pull rather than wait for old media to push things as at present. Information and entertainment would be on demand. Overly hierarchical and status-conscious societies would rapidly erode. Time as we knew it would become meaningless — five hours of music would be delivered to you in less than five seconds. Distance would become irrelevant. A UK paper would be as accessible in New York as it was in London.

Writing 15 years later in the Observer, the critic John Naughton compared the begetter of the world wide web, Sir Tim Berners-Lee, with the seismic disruption five centuries earlier caused by the invention of movable type. Just as Gutenberg had no conception of his invention’s eventual influence on religion, science, systems of ideas, and democracy, so — in 2008 — “it will be decades before we have any real understanding of what Berners-Lee hath wrought.”

The entire economic model of information was about to fall apart.

And so I decided to go to America with the leader of the PDU team, Tony Ageh, and see the internet for myself. A 33-year-old “creative,” Ageh had had exactly one year’s experience in media — as an advertising copy chaser for The Home Organist magazine — before joining the Guardian. I took with me a copy of The Internet for Dummies. Thus armed, we set off to America for a four-day, four-city tour.

In Atlanta, we found the Atlanta Journal-Constitution (AJC), which was considered a thought leader in internet matters, having joined the Prodigy Internet Service, an online service offering subscribers information over dial-up 1,200 bit/second modems. After four months the internet service had 14,000 members, paying 10 cents a minute to access online banking, messaging, full webpage hosting and live share prices.

The AJC business plan envisaged building to 35,000 or 40,000 subscribers by year three. By that time, they calculated, they would be earning $3.3 million in subscription fees and $250,000 a year in advertising. “If it all goes to plan,” David Scott, publisher of the Electronic Information Service, told us, “it’ll be making good money. If it goes any faster, this is a real business.”

We also met Michael Gordon, the managing editor. “The appeal to the management is, crudely, that it is so much cheaper than publishing a newspaper,” he said.

We wrote it down.

“We know there are around 100,000 people in Atlanta with PCs. There are, we think, about one million people wealthy enough to own them. Guys see them as a toy; women see them as a tool. The goldmine is going to be the content, which is why newspapers are so strongly placed to take advantage of this revolution. We’re out to maximize our revenue by selling our content any way we can. If we can sell it on CD-ROM or TV as well, so much the better.”

“Papers? People will go on wanting to read them, though it’s obviously much better for us if we can persuade them to print them in their own homes. They might come in customized editions. Edition 14B might be for females living with a certain income.”

It was heady stuff.

From Atlanta we hopped up to New York to see the Times’s online service, @Times. We found an operation consisting of an editor plus three staffers and four freelancers. The team had two PCs, costing around $4,000 each. The operation was confident, but small.

The @Times content was weighted heavily towards arts and leisure. The opening menus offered a panel with about 15 reviews of the latest films, theatre, music, and books – plus book reviews going back two years. The site offered the top 15 stories of the day, plus some sports news and business.

There was a discussion forum about movies, with 47 different subjects being debated by 235 individual subscribers. There was no archive due to the fact that — in one of the most notorious newspaper licensing cock-ups in history — the NYT in 1983 had given away all rights to its electronic archive (for all material more than 24 hours old) in perpetuity to Mead/Lexis.

That deal alone told you how nobody had any clue what was to come.

We sat down with Henry E. Scott, the group director of @Times. “Sound and moving pictures will be next. You can get them now. I thought about it the other day, when I wondered about seeing 30 seconds of The Age of Innocence. But then I realized it would take 90 minutes to download that and I could have seen more or less the whole movie in that time. That’s going to change.”

But Scott was doubtful about the lasting value of what they were doing — at least, in terms of news. “I can’t see this replacing the newspaper,” he said confidently. “People don’t read computers unless it pays them to, or there is some other pressing reason. I don’t think anyone reads a computer for pleasure. The San Jose Mercury [News] has put the whole newspaper online. We don’t think that’s very sensible. It doesn’t make sense to offer the entire newspaper electronically.”

We wrote it all down.

“I can’t see the point of news on-screen. If I want to know about a breaking story I turn on the TV or the radio. I think we should only do what we can do better than in print. If it’s inferior to the print version there’s no point in doing it.”

Was there a business plan? Not in Scott’s mind. “There’s no way you can make money out of it if you are using someone else’s server. I think the LA Times expects to start making money in about three years’ time. We’re treating it more as an R & D project.”




From New York we flitted over to Chicago to see what the Tribune was up to. In its 36-storey Art Deco building — a spectacular monument to institutional self-esteem — we found a team of four editorial and four marketing people working on a digital service, with the digital unit situated in the middle of the newsroom. The marketeers were beyond excited about the prospect of being able to show houses or cars for sale and arranged a demonstration. We were excited, too, even if the pictures were slow and cumbersome to download.

We met Joe Leonard, associate editor. “We’re not looking at Chicago Online as a money maker. We’ve no plans even to break even at this stage. My view is simply that I’m not yet sure where I’m going, but I’m on the boat, in the water — and I’m ahead of the guy who is still standing on the pier.”

Reach before revenue.

Finally we headed off to Boulder, Colorado, in the foothills of the Rockies, where Knight Ridder had a team working on their vision of the newspaper of tomorrow. The big idea was, essentially, what would become the iPad — only the team in Boulder hadn’t got much further than making an A4 block of wood with a “front page” stuck on it. The 50-something director of the research centre, Roger Fidler, thought the technology capable of realizing his dream of a ‘personal information appliance’ was a couple of years off.

Tony and I had filled several notebooks. We were by now beyond tired and talked little over a final meal in an Italian restaurant beneath the Rocky Mountains.

We had come. We had seen the internet. We were conquered.

* * *

Looking back from the safe distance of nearly 25 years, it’s easy to mock the fumbling, wildly wrong predictions about where this new beast was going to take the news industry. We had met navigators and pioneers. They could dimly glimpse where the future lay. Not one of them had any idea how to make a dime out of it, but at the same time they intuitively sensed that it would be more reckless not to experiment. It seemed reasonable to assume that — if they could be persuaded to take the internet seriously — their companies would dominate in this new world, as they had in the old world.

We were no different. After just four days it seemed blindingly obvious that the future of information would be mainly digital. Plain old words on paper — delivered expensively by essentially Victorian production and distribution methods — couldn’t, in the end, compete. The future would be more interactive, more image-driven, more immediate. That was clear. But how on earth could you graft a digital mindset and processes onto the stately ocean liner of print? How could you convince anyone that this should be a priority when no one had yet worked out how to make any money out of it? The change, and therefore the threat, was likely to happen rapidly and maybe violently. How quickly could we make a start? Or was this something that would be done to us?

In a note for Peter Preston on our return I wrote, “The internet is fascinating, intoxicating . . . it is also crowded out with bores, nutters, fanatics and middle managers from Minnesota who want the world to see their home page and CV. It’s a cacophony, a jungle. There’s too much information out there. We’re all overloaded. You want someone you trust to fillet it, edit it and make sense of it for you. That’s what we do. It’s an opportunity.”


I spent the next year trying to learn more and then the calendar clicked on to 1995 — The Year the Future Began, at least according to the cultural historian W. Joseph Campbell, who used the phrase as the title of a book twenty years later. It was the year Amazon.com, eBay, Craigslist, and Match.com established their presence online. Microsoft spent $300m launching Windows 95 amid weeks of marketing hype, including millions for the rights to the Rolling Stones hit “Start Me Up,” which became the launch anthem.

Cyberspace — as the cyber dystopian Evgeny Morozov recalled, looking back on that period — felt like space itself. “The idea of exploring cyberspace as virgin territory, not yet colonized by governments and corporations, was romantic; that romanticism was even reflected in the names of early browsers (‘Internet Explorer,’ ‘Netscape Navigator’).”

But, as Campbell was to reflect, “no industry in 1995 was as ill-prepared for the digital age, or more inclined to pooh-pooh the disruptive potential of the Internet and World Wide Web, than the news business.” It suffered from what he called “innovation blindness” — “an inability, or a disinclination to anticipate and understand the consequences of new media technology.”

1995 was, then, the year the future began. It happened also to be the year in which I became editor of the Guardian.

* * *

I was 41 and had not, until very recently, really imagined this turn of events. My journalism career took a traditional enough path. A few years reporting; four years writing a daily diary column; a stint as a feature writer — home and abroad. In 1986 I left the Guardian to be the Observer’s television critic. When I rejoined the Guardian I was diverted towards a route of editing — launching the paper’s Saturday magazine followed by a daily tabloid features section and moving to be deputy editor in 1993. Peter Preston — unshowy, grittily obstinate, brilliantly strategic — looked as if he would carry on editing for years to come. It was a complete surprise when he took me to the basement of the resolutely unfashionable Italian restaurant in Clerkenwell he favored, to tell me he had decided to call it a day.

On most papers the proprietor or chief executive would find an editor and take him or her out to lunch to do the deal. On the Guardian — at least according to tradition dating back to the mid-70s — the Scott Trust made the decision after balloting the staff, a process that involved manifestos, pub hustings, and even, by some candidates, a little frowned-on campaigning.

I supposed I should run for the job. My mission statement said I wanted to boost investigative reporting and get serious about digital. It was, I fear, a bit Utopian. I doubt much of it impressed the would-be electorate. British journalists are programmed to skepticism about idealistic statements concerning their trade. Nevertheless, I won the popular vote and was confirmed by the Scott Trust after an interview in which I failed to impress at least one Trustee with my sketchy knowledge of European politics. We all went off for a drink in the pub round the back of the office. A month later I was editing.

“Fleet Street,” as the UK press was collectively called, was having a torrid time, not least because the biggest beast in the jungle, Rupert Murdoch, had launched a prolonged price war that was playing havoc with the economics of publishing. His pockets were so deep he could afford to slash the price of The Times almost indefinitely — especially if it forced others out of business.

Reach before revenue — as it wasn’t known then.

The newest kid on the block, the Independent, was suffering the most. To their eyes, Murdoch was behaving in a predatory way. We calculated the Independent titles were losing around £42 million (nearly £80 million in today’s money). Murdoch’s Times, by contrast, had seen its sales rocket 80 per cent by cutting its cover prices to below what it cost to print and distribute. The circulation gains had come at a cost — about £38 million in lost sales revenue. But Murdoch’s TV business, BSkyB, was making booming profits and the Sun continued to throw off huge amounts of cash. He could be patient.


The Telegraph had been hit hard — losing £45 million in circulation revenues through cutting the cover price by 18 pence. The end of the price war left it slowly clawing back lost momentum, but it was still £23 million adrift of where it had been the previous year. Murdoch — as so often — had done something bold and aggressive. Good for him, not so good for the rest of us. Everyone was tightening their belts in different ways. The Independent effectively gave up on Scotland. The Guardian saved a million a year in newsprint costs by shaving half an inch off the width of the paper.

The Guardian, by not getting into the price war, had “saved” around £37 million it would otherwise have lost. But its circulation had been dented by about 10,000 readers a day. Moreover, the average age of the Guardian reader was 43 — something that preoccupied us rather a lot. We were in danger of having a readership too old for the job advertisements we carried.

Though the Guardian itself was profitable, the newspaper division was losing nearly £12 million (north of £21 million today). The losses were mainly due to the sister Sunday title, the Observer, which the Scott Trust had purchased as a defensive move against the Independent in 1993. The Sunday title had a distinguished history, but was hemorrhaging cash: £11 million losses.

Everything we had seen in America had to be put on hold for a while. The commercial side of the business never stopped reminding us that only three percent of households owned a PC and a modem.

* * *

But the digital germ was there. My love of gadgets had not extended to understanding how computers actually worked, so I commissioned a colleague to write a report telling me, in language I could understand, how our computers measured up against what the future would demand. The Atex system we had installed in 1987 gave everyone a dumb terminal on their desk — little more than a basic word processor. It couldn’t connect to the internet, though there was a rudimentary internal messaging system. There was no word count or spellchecker and storage space was limited. It could not be used with floppy disks or CD-ROMs. Within eight years of purchase it was already a dinosaur.

There was one internet connection in the newsroom, though most reporters were unaware of it. It was rumored that downstairs a bloke called Paul in IT had a Mac connected to the internet through a dial-up modem. Otherwise we were sealed off from the outside world.

Some of these journalist geeks began to invent Heath Robinson solutions to make the inadequate kit in Farringdon Road do the things we wanted in order to produce a technology website online. Tom Standage — he later became deputy editor of the Economist, but then was a freelance tech writer — wrote some scripts to take articles out of Atex and format them into HTML so they could be moved onto the modest Mac web server — our first content management system, if you like. If too many people wanted to read the tech site at once the system crashed. So Standage and the site’s editor, Azeem Azhar, would take it in turns sitting in the server room in the basement of the building rebooting the machines by hand — unplugging them and physically moving the internet cables from one machine to another.
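Standage’s actual scripts are long gone, but a minimal sketch of that kind of conversion — plain text out of the editorial system, HTML in — might look like this (the article layout and function name here are hypothetical, not a record of what the Guardian’s scripts did):

```python
from html import escape

def atex_to_html(raw: str) -> str:
    """Turn a plain-text article (headline on the first line, body
    paragraphs separated by blank lines) into a minimal HTML page."""
    blocks = raw.strip().split("\n\n")
    headline, paragraphs = blocks[0], blocks[1:]
    # Escape each paragraph so characters like & and < survive as HTML
    body = "\n".join(f"<p>{escape(p.strip())}</p>" for p in paragraphs)
    return (
        "<html><head><title>{t}</title></head>\n"
        "<body>\n<h1>{t}</h1>\n{b}\n</body></html>"
    ).format(t=escape(headline), b=body)

print(atex_to_html("Hello Web\n\nFirst paragraph.\n\nSecond & last."))
```

In 1995 the resulting files would then have been carried onto the Mac web server by hand; the principle — strip the typesetting system’s formatting, wrap each paragraph in tags — is what made a “content management system” out of a word processor and a modem.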

What would the future look like? We imagined personalized editions, even if we had not the faintest clue how to produce them. We guessed that readers might print off copies of the Guardian in their homes — and even toyed with the idea of buying every reader a printer. There were glimmers of financial hope. Our readers were spending £56 million a year buying the Guardian but we retained none of it: the money went on paper and distribution. In the back of our minds we ran calculations about how the economics of newspapers would change if we could save ourselves the £56 million a year “old world” cost.


On top of editing, the legal entanglements sometimes felt like a full-time job on their own. Trying to engineer a digital future for the Guardian felt like a third job. There were somehow always more urgent issues. By March 1996, ideas we’d hatched in the summer of 1995 to graft the paper onto an entirely different medium were already out of date. That was a harbinger of the future. No plans in the new world lasted very long.

It was now apparent that we couldn’t get away with publishing selective parts of the Guardian online. Other newspapers had shot that fox by pushing out everything. We were learning about the connectedness of the web — and the IT team tentatively suggested that we might use some “offsite links” to other versions of the same story to save ourselves the need to write our own version of everything. This later became the mantra of the City University of New York (CUNY) digital guru Jeff Jarvis — “Do what you do best, and link to the rest.”

We began to grapple with numerous basic questions about the new waters into which we were gingerly dipping our toes.

Important question: Should we charge?

The Times and the Telegraph were both free online. A March 1996 memo from Bill Thompson, a developer who had joined the Guardian from Pipex, ruled it out:

I do not believe the UK internet community would pay to read an online edition of a UK newspaper. They may pay to look at an archive, but I would not support any attempt to make the Guardian a subscription service online . . . It would take us down a dangerous path.

In fact, I believe that the real value from an online edition will come from the increased contact it brings with our readers: online newspapers can track their readership in a way that print products never can, and the online reader can be a valuable commodity in their own right, even if they pay nothing for the privilege.

Thompson was prescient about how the overall digital economy would work — at least for players with infinitely larger scale and vastly more sophisticated technology.

What time of day should we publish?

The electronic Telegraph was published at 8 a.m. each day — mainly because of its print production methods. The Times, more automated, was available as soon as the presses started rolling. The Guardian started making some copy available from first edition through to the early hours. It would, we were advised, be fraught with difficulties to publish stories at the same time they were ready for the press.

Why were we doing it anyway?

Thompson saw the dangers of cannibalization, that readers would stop buying the paper if they could read it for free online. It could be seen as a form of marketing. His memo seemed ambivalent as to whether we should venture into this new world at all:

The Guardian excels in presenting information in an attractive, easy-to-use and easy-to-navigate form. It is called a “broadsheet newspaper.” If we try to put the newspaper on-line (as the Times has done) then we will just end up using a new medium to do badly what an old medium does well. The key question is whether to make the Guardian a website, with all that entails in terms of production, links, structure, navigational aids etc. In summer 1995 we decided that we would not do this.

But was that still right a year later? By now we had the innovation team — PDU — still in the basement of one building in Farringdon Road, and another team in a Victorian loft building across the way in Ray Street. We were, at the margins, beginning to pick up some interesting fringe figures who knew something about computers, if not journalism. But none of this was yet pulling together into a coherent picture of what a digital Guardian might look like.

An 89-page business plan drawn up in October 1996 made it plain where the priorities lay: print.

We wanted to keep growing the Guardian’s circulation, aiming for a modest increase to 415,000 by March 2000 — which would make us the ninth-biggest paper in the UK — with the Observer aiming for 560,000 with the aid of additional sections. A modest investment of £200,000 a year in digital was dwarfed by an additional £6 million cash injection into the Observer, spread over three years.

As for “on-line services” (we were still hyphenating it) we did want “a leading-edge presence” (whatever that meant), but essentially we thought we had to be there because we had to be there. By being there we would learn and innovate and — surely? — there were bound to be commercial opportunities along the road. It wasn’t clear what.

We decided we might usefully take broadcasting, rather than print, as a model — emulating its “immediacy, movement, searchability and layering.”

If this sounded as if we were a bit at sea, we were. We hadn’t published much digitally to this point. We had taken half a dozen meaty issues — including parliamentary sleaze, and a feature on how we had continued to publish on the night our printing presses had been blown up by the IRA — and turned them into special reports.

It is a tribute to our commercial colleagues that they managed to pull in the thick end of half a million pounds to build these websites. Other companies’ marketing directors were presumably like ours — anxious about the youth market and keen for their brands to feel “cool.” In corporate Britain in 1996, there was nothing much cooler than the internet, even if not many people had it, knew where to find it or understood what to do with it.

* * *

The absence of a controlling owner meant we could run the Guardian in a slightly different way from some papers. Each day began with a morning conference open to anyone on the staff. In the old Farringdon Road office, it was held around two long narrow tables in the editor’s office — perhaps 30 or 40 people sitting or standing. When we moved to our new offices at Kings Place, near Kings Cross in North London, we created a room that was, at least theoretically, less hierarchical: a horseshoe of low yellow sofas with a further row of stools at the back. In this room would assemble a group of journalists, tech developers and some visitors from the commercial departments every morning at about 10 a.m. If it was a quiet news day we might expect 30 or so. On big news days, or with an invited guest, we could host anything up to 100.

A former Daily Mail journalist, attending his first morning conference, muttered to a colleague in the newsroom that it was like Start the Week — a Monday morning BBC radio discussion program. All talk and no instructions. In a way, he was right: It was difficult, in conventional financial or efficiency terms, to justify 50 to 60 employees stopping work to gather together each morning for anything between 25 and 50 minutes. No stories were written during this period, no content generated.

But something else happened at these daily gatherings. Ideas emerged and were kicked around. Commissioning editors would pounce on contributors and ask them to write the thing they’d just voiced. The editorial line of the paper was heavily influenced, and sometimes changed, by the arguments we had. The youngest member of staff would be in the same room as the oldest: They would be part of a common discussion around news. By a form of accretion and osmosis an idea of the Guardian was jointly nourished, shared, handed down, and crafted day by day.


It led to a very strong culture. You might love the Guardian or despise it, but it had a definite sense of what it believed in and what its journalism was. It could sometimes feel an intimidating meeting — even for, or especially for, the editor. The culture was intended to be one of challenge: If we’d made a wrong decision, or slipped up factually or tonally, someone would speak up and demand an answer. But challenge was different from blame: It was not a meeting for dressing downs or bollockings. If someone had made an error the previous day we’d have a post-mortem or unpleasant conversation outside the room. We’d encourage people to want to contribute to this forum, not make them fear disapproval or denunciation.

There was a downside to this. It could, and sometimes did, lead to a form of group-think. However herbivorous the culture we tried to nurture, I was conscious of some staff members who felt awkward about expressing views outside what we hoped was a fairly broad consensus. But, more often, there would be a good discussion on two or three of the main issues of the day. We encouraged specialists or outside visitors to come in and discuss breaking stories. Leader writers could gauge the temperature of the paper before penning an editorial. And, from time to time, there would be the opposite of consensus: Individuals, factions, or groups would come and demand we change our line on Russia, bombing in Bosnia, intervention in Syria, Israel, blood sports, or the Labor leadership.

The point was this: The Guardian was not one editor’s plaything or megaphone. It emerged from a common conversation — and was open to internal challenge when editorial staff felt uneasy about aspects of our journalism or culture.

* * *

Within two years — slightly uncomfortable at the power I had acquired as editor — I gave some away. I wanted to make correction a natural part of the journalistic process, not a bitterly contested post-publication battleground designed to be as difficult as possible.

We created a new role on the Guardian: a readers’ editor. He or she would be the first port of call for anyone wanting to complain about anything we did or wrote. The readers’ editor would have daily space in the paper — off-limits to the editor — to correct or clarify anything and would also have a weekly column to raise broader issues of concern. It was written into the job description that the editor could not interfere. And the readers’ editor was given the security that he/she could not be removed by the editor, only by the Scott Trust.

On most papers editors had sat in judgment on themselves. They commissioned pieces, edited and published them — and then were supposed neutrally to assess whether their coverage had, in fact, been truthful, fair, and accurate. An editor might ask a colleague — usually a managing editor — to handle a complaint, but he/she was in charge from beginning to end. It was an autocracy. That mattered even more in an age when some journalism was moving away from mere reportage and observation to something closer to advocacy or, in some cases, outright pursuit.

Allowing even a few inches of your own newspaper to be beyond your direct command meant that your own judgments, actions, ethical standards and editorial decisions could be held up to scrutiny beyond your control. That, over time, was bound to change your journalism. Sunlight is the best disinfectant: that was the journalist-as-hero story we told about what we do. So why wouldn’t a bit of sunlight be good for us, too?

The first readers’ editor was Ian Mayes, a former arts and obituaries editor then in his late 50s. We felt the first person in the role needed to have been a journalist — and one who would command instant respect from a newsroom which otherwise might be somewhat resistant to having their work publicly critiqued or rebutted. There were tensions and some resentment, but Ian’s experience, fairness and flashes of humor eventually won most people round.

One or two of his early corrections convinced staff and readers alike that he had a light touch about the fallibility of journalists:

In our interview with Sir Jack Hayward, the chairman of Wolverhampton Wanderers, page 20, Sport, yesterday, we mistakenly attributed to him the following comment: “Our team was the worst in the First Division and I’m sure it’ll be the worst in the Premier League.” Sir Jack had just declined the offer of a hot drink. What he actually said was: “Our tea was the worst in the First Division and I’m sure it’ll be the worst in the Premier League.” Profuse apologies.

In an article about the adverse health effects of certain kinds of clothing, pages 8 and 9, G2, August 5, we omitted a decimal point when quoting a doctor on the optimum temperature of testicles. They should be 2.2 degrees Celsius below core body temperature, not 22 degrees lower.

But in his columns he was capable of asking tough questions about our editorial decisions — often prompted by readers who had been unsettled by something we had done. Why had we used a shocking picture which included a corpse? Were we careful enough in our language around mental health or disability? Why so much bad language in the Guardian? Were we balanced in our views of the Kosovo conflict? Why were Guardian journalists so innumerate? Were we right to link to controversial websites?

In most cases Mayes didn’t come down on one side or another. He would often take readers’ concerns to the journalist involved and question them — sometimes doggedly — about their reasoning. We learned more about our readers through these interactions; and we hoped that Mayes’s writings, candidly explaining the workings of a newsroom, helped readers better understand our thinking and processes.

It was, I felt, good for us to be challenged in this way. Mayes was invaluable in helping devise systems for the “proper” way to correct the record. A world in which — to coin a phrase — you were “never wrong for long” posed the question of whether you went in for what Mayes termed “invisible mending.” Some news organizations would quietly amend whatever it was that they had published in error, no questions asked. Mayes felt differently: The act of publication was something on the record. If you wished to correct the record, the correction should be visible.


We were some years off the advent of social media, in which any error was likely to be pounced on in a thousand hostile tweets. But we had some inkling that the iron grip of centralized control that a newspaper represented was not going to last.

I found liberation in having created this new role. There are few things editors enjoy less than the furious early morning phone call or email from the irate subject of their journalism. Either the complainant is wrong — in which case time is wasted in heated self-justification — or they’re right, wholly or partially, and immediately you’re into remorseful calculations about saving face. If readers knew we honestly and rapidly — even immediately — owned up to our mistakes they should, in theory, trust us more. That was the David Broder theory, and I bought it. Readers certainly made full use of the readers’ editor’s existence. Within five years Mayes was dealing with around 10,000 calls, emails, and letters a year — leading to around 1,200 corrections, big and small. It’s not, I think, that we were any more error-prone than other papers. But if you win a reputation for openness, you’d better be ready to take it as seriously as your readers will.

Our journalism became better. If, as a journalist, you know there are a million sleuth-eyed editors out there waiting to leap on your tiniest mistake, it makes you more careful. It changes the tone of your writing. Our readers often know more than we do. That became a mantra of the new world, coined by the blogger and academic Dan Gillmor in his 2004 book We the Media, but it was already becoming evident in the late 1990s.

The act of creating a readers’ editor felt like a profound recognition of the changing nature of what we were engaged in. Journalism was not an infallible method guaranteed to result in something we would proclaim as The Truth — but a more flawed, tentative, iterative and interactive way of getting towards something truthful.

Admitting that felt both revolutionary and releasing.

***

Excerpted from Breaking News: The Remaking of Journalism and Why It Matters Now by Alan Rusbridger. Published by Farrar, Straus and Giroux, November 27, 2018. Copyright © 2018 by Alan Rusbridger. All rights reserved.

Longreads Editor: Aaron Gilbreath

Preparing for a Post-Roe America

Ralph Grunewald / Getty

Laura Barcella | Longreads | February 2019 | 13 minutes (3,517 words)

The 46th anniversary of Roe v. Wade fell on January 22 — but the days of relatively uncomplicated American abortion access are, most likely, numbered. In fact, author Robin Marty believes it’s not a matter of if Roe will be overturned but when.

For more than ten years, the Minneapolis-based freelance reporter and author of the new book Handbook for a Post-Roe America has been diligently chronicling the twists and turns of both the pro-choice and anti-abortion movements. Ever since Supreme Court Justice Anthony Kennedy announced his resignation, Marty — like many other pro-choice Americans — has been waiting for the proverbial pro-life shoe to drop. Losing Kennedy, the swing voter on a number of major abortion rulings, and gaining Brett Kavanaugh — a long-time pro-life ally — seems to all but ensure the end of Roe, as well as the downfall of abortion being considered a constitutional right.

Indeed, several weeks after Marty and I spoke in late January, Kavanaugh voted with a minority of Justices to overturn recent Court precedent in favor of a law that sought to impose a new form of undue burden on abortion-seekers in Louisiana. The Cut called Kavanaugh’s dissenting opinion something verging on gaslighting. In it, he postulates that perhaps the undue burden — abortion providers being required to gain admitting privileges at local hospitals — could simply be met, when of course providers have already been trying to gain admitting privileges for years. The Court ultimately blocked the implementation of the law, but only because the conservative Chief Justice, John Roberts, voted with the liberals. The margin of safety has grown vanishingly thin.

Let’s consider what that means. If Roe were overturned, it wouldn’t necessarily make it impossible for a pregnant person to obtain an abortion, but it would potentially make an already challenging process even more daunting. As it stands, obtaining an abortion is already far from affordable or convenient for many women, even in blue states with a plethora of clinics. Despite Roe’s current status, and despite the fact that statistically, most Americans believe in a woman’s right to choose, abortion care is still often portrayed as a privilege instead of a right — or as a miserable “worst-case” scenario rather than a straightforward medical procedure. Read more…

Atlantic City Is Really Going Down This Time

Illustration by Matt Chinworth

Rebecca McCarthy | Longreads | February 2019 | 14 minutes (3,579 words)

Atlantic City covers the northern third of Absecon Island, a barrier island made up of an alarming amount of sand. It is a bad town to die in — there are plenty of vacant lots but no cemeteries. In many places, if you dig down more than eight feet you hit water. A couple blocks away from the beach, the Absecon Lighthouse is built on a submerged wooden foundation for exactly that reason — so long as you keep wood wet and away from oxygen, it won’t rot. “We haven’t tipped yet,” said Buddy Grover, the 91-year-old lighthouse keeper, “but it does sway in the wind sometimes.”

“The problem with barrier islands is that, sort of by definition, they move,” said Dan Heneghan. Heneghan covered the casino beat for the Press of Atlantic City for 20 years before moving to the Casino Control Commission in 1996. He retired this past May. He’s a big, friendly guy with a mustache like a push broom and a habit of lowering his voice and pausing near the end of his sentences, as if he’s telling you a ghost story. (“Atlantic City was, in mob parlance … a wide open city. No one family … controlled it.”) We were standing at the base of the lighthouse, which he clearly adores. He’s climbed it 71 times this year. “I don’t volunteer here, I just climb the steps,” he said. “It’s a lot more interesting than spending time on a Stairmaster.” The lighthouse was designed by George Meade, a Civil War general most famous for defeating Robert E. Lee at the Battle of Gettysburg. It opened in 1857 but within 20 years the beach had eroded to such an extent that the water was only 75 feet away from the base. Jetties were added until the beach was built back out, but a large iron anchor sits at the old waterline, either as a reminder or a threat.

A little more than two years ago, when I was an intern at a now shuttered website called The Awl, I went out to Atlantic City to cover the Trump Taj Mahal’s last weekend before it closed for good. My first night there I met a woman named Juliana Lykins who told me about Tucker’s Island — New Jersey’s first seaside resort, which had been slowly overtaken by the sea until it disappeared completely. This was a month before the election. The “grab ’em by the pussy” tape had just broken, it was pouring rain, the city was on the verge of defaulting on its debts, and 2,000 casino workers were about to lose their jobs. At the time — my clothes soaking wet, falling asleep in a Super 8 to the sound of Scottie Nell Hughes on CNN — it was hard to understand what Lykins was saying as anything other than a metaphor for the country. I missed the larger menace and focused on the immediate. Trump was elected obviously, but Tucker’s Island wasn’t a figurative threat; it was a very straightforward story about what happens to coastal communities when the water moves in. Read more…

Longreads Best of 2018: All of Our No. 1 Story Picks

All through December, we’ll be featuring Longreads’ Best of 2018. Here’s a list of every story that was chosen as No. 1 in our weekly Top 5 email.

If you like these, you can sign up to receive our weekly email every Friday. Read more…

The Second Half of Watergate Was Bigger, Worse, and Forgotten By the Public

Bettmann / Getty

David Montero | an excerpt adapted from Kickback: Exposing the Global Corporate Bribery Network | Viking | November 2018 | 16 minutes (4,298 words)

In 1975, Peter Clark was a young attorney in the Enforcement Division of the U.S. Securities and Exchange Commission. Founded three years earlier, the Enforcement Division was tasked with investigating possible violations of federal securities laws. One morning, Clark was in his office when the division’s director, Stanley Sporkin, appeared, greatly vexed. Sporkin, tall and corpulent with deep-set eyes, was waving a newspaper, Clark recalled. “How the ‘bleep’ could a publicly held company have a slush fund?” Sporkin asked.

Two years had passed since the Watergate scandal broke, and less than a year since President Nixon had resigned, but the reverberations of the scandal were still rocking Washington. Its revelation that multinational corporations, including some of the most prestigious brands in the United States, had been making illegal contributions to political parties not only at home but in foreign countries around the world would later be described by Ray Garrett, the chairman of the SEC, as “the second half of Watergate, and by far the larger half.” Read more…