In 1989, Ruben Castaneda was an ambitious young reporter at the Washington Post, covering the downfall of then-Mayor Marion Barry. And like Barry, Castaneda also had a double life.
Eleven years ago, one of Washington’s most tradition-bound companies placed a bet that would transform its fortunes. The wager, by The Washington Post Co. and its Kaplan division, took the form of a $165 million purchase of an Atlanta-based chain of for-profit vocational schools that catered to low-income students. The bet was big — the price equal to the profits earned that year by The Post Co.’s print-media pillars: this newspaper and Newsweek magazine. So was the payoff. But what proved a deftly timed business move brought other, less welcome scrutiny to a family-run company that had long prided itself on serving the public interest.
Erica Armstrong Dunbar | Never Caught: The Washingtons’ Relentless Pursuit of Their Runaway Slave Ona Judge | Atria / 37 Ink | March 2017 | 19 minutes (5,244 words)
Two years after the death of her owner, Betty learned her mistress was to remarry. She most likely received the news of her mistress’s impending second marriage with great wariness as word spread that Martha Custis’s intended was Colonel George Washington. The colonel was a fairly prominent landowner with a respectable career as a military officer and an elected member of the Virginia House of Burgesses. His marriage to the widowed Martha Custis would offer him instant wealth and the stability of a wife and family that had eluded him.
A huge yet necessary transition awaited Martha Custis as she prepared to marry and move to the Mount Vernon estate, nearly one hundred miles away. For Betty, as well as the hundreds of other slaves who belonged to the Custis estate, the death of their previous owner and Martha’s marriage to George Washington were a reminder of their vulnerability. It was often after the death of an owner that slaves were sold to remedy the debts held by an estate.
To cover this past weekend’s inauguration and Women’s March protests in Washington, D.C., Longreads teamed up with Seattle publication The Stranger. Armed with mood rings supplied by their editors, writers Sydney Brownstone and Heidi Groover, along with photographer Nate Gowdy, met those celebrating and protesting, shared their personal perspectives, and examined what it means for the next four years. Here’s their full diary from the events of January 18-23.
L. A. Kauffman | Direct Action: Protest and the Reinvention of American Radicalism | Verso Books | February 2017 | 33 minutes (8,883 words)
* * *
If the government won’t stop the war, we’ll stop the government.
The largest and most audacious direct action in US history is also among the least remembered, a protest that has slipped into deep historical obscurity. It was a protest against the Vietnam War, but it wasn’t part of the storied sixties, having taken place in 1971, a year of nationwide but largely unchronicled ferment. To many, infighting, violence, and police repression had effectively destroyed “the movement” two years earlier in 1969.
That year, Students for a Democratic Society (SDS), the totemic organization of the white New Left, had disintegrated into dogmatic and squabbling factions; the Black Panther Party, meanwhile, had been so thoroughly infiltrated and targeted by law enforcement that factionalism and paranoia had come to eclipse its expansive program of revolutionary nationalism. But the war had certainly not ended, and neither had the underlying economic and racial injustices that organizers had sought to address across a long decade of protest politics. If anything, the recent flourishing of heterodox new radicalisms—from the women’s and gay liberation movements to radical ecology to militant Native American, Chicano, Puerto Rican, and Asian-American movements—had given those who dreamed of a world free of war and oppression a sobering new awareness of the range and scale of the challenges they faced.
On May 3, 1971, after nearly two weeks of intense antiwar protest in Washington, DC, ranging from a half-million-person march to large-scale sit-ins outside the Selective Service, Justice Department, and other government agencies, some 25,000 young people set out to do something brash and extraordinary: disrupt the basic functioning of the federal government through nonviolent action. They called themselves the Mayday Tribe, and their slogan was as succinct as it was ambitious: “If the government won’t stop the war, we’ll stop the government.” The slogan was of course hyperbolic — even if Washington, DC were completely paralyzed by protest for a day, a week, or a month, that would not halt the collection of taxes, the delivery of mail, the dropping of bombs, or countless other government functions — but that made it no less electrifying as a rallying cry, and no less alarming to the Nixon administration (Nixon’s White House chief of staff, H.R. Haldeman, called it “potentially a real threat”). An elaborate tactical manual distributed in advance detailed twenty-one key bridges and traffic circles for protesters to block nonviolently, with stalled vehicles, improvised barricades, or their bodies. The immediate goal was to snarl traffic so completely that government employees could not get to their jobs. The larger objective was “to create the spectre of social chaos while maintaining the support or at least toleration of the broad masses of American people.”
The protest certainly interfered with business as usual in Washington: traffic was snarled, and many government employees stayed home. Others commuted to their offices before dawn, and three members of Congress even resorted to canoeing across the Potomac to get themselves to Capitol Hill. But most of the planned blockades held only briefly, if at all, because most of the protesters were arrested before they even got into position. Thanks to the detailed tactical manual, the authorities knew exactly where protesters would be deployed. To stop them from paralyzing the city, the Nixon Administration had made the unprecedented decision to sweep them all up, using not just police but actual military forces.
Under direct presidential orders, Attorney General John Mitchell mobilized the National Guard and thousands of troops from the Army and the Marines to join the Washington, DC police in rounding up everyone suspected of participating in the protest. As one protester noted, “Anyone and everyone who looked at all freaky was scooped up off the street.” A staggering number of people — more than 7,000 — were locked up before the day was over, in what remain the largest mass arrests in US history.
David Reid | The Brazen Age: New York City and the American Empire: Politics, Art, and Bohemia | Pantheon | March 2016 | 31 minutes (8,514 words)
The excerpt below is adapted from The Brazen Age, by David Reid, which examines the “extraordinarily rich culture and turbulent politics of New York City between the years 1945 and 1950.” This story is recommended by Longreads contributing editor Dana Snitzky.
* * *
Probably I was in the war.
—NORMAN MAILER, Barbary Shore (1951)
A hideous, inhuman city. But I know that one changes one’s mind.
In March 1946 the young French novelist and journalist Albert Camus traveled by freighter from Le Havre to New York, arriving in the first week of spring. Le Havre, the old port city at the mouth of the Seine, had almost been destroyed in a battle between its German occupiers and a British warship during the Normandy invasion; huge ruins ringed the harbor. In his travel journal Camus writes: “My last image of France is of destroyed buildings at the very edge of a wounded earth.”
At the age of thirty-two this Algerian Frenchman, who had been supporting himself with odd jobs when the war began, was about to become very famous. By 1948, he would become an international culture hero: author of The Stranger and The Plague, two of the most famous novels to come out of France in the forties, and of the lofty and astringent essays collected in The Myth of Sisyphus.
Camus’s visit to the United States, sponsored by the French Ministry of Foreign Affairs but involving no official duties, was timed to coincide with Alfred A. Knopf’s publication of The Stranger in a translation by Stuart Gilbert, the annotator of James Joyce’s Ulysses. In the spring of 1946 France was exporting little to the United States except literature. Even most American readers with a particular interest in France knew of Camus, if at all, as a distant legend, editor of the Resistance newspaper Combat and an “existentialist.”
Reviewing The Stranger in the New Yorker, Edmund Wilson, usually omniscient, confessed that he knew absolutely nothing about existentialism except that it was enjoying a “furious vogue.” If there were rumored to be philosophical depths in this novel about the motiveless murder of an Arab on a North African beach, they frankly eluded him. For Wilson the book was nothing more than “a fairly clever feat” — the sort of thing that a skillful Hemingway imitator like James M. Cain had done as well or better in The Postman Always Rings Twice. America’s most admired literary critic also had his doubts about Franz Kafka, the writer of the moment, suspecting that the claims being made for the late Prague fabulist were exaggerated. But still, like almost everyone else in New York’s intellectual circles, especially the young, Wilson was intensely curious about what had been written and thought in occupied Europe, above all in France.
“Our generation had been brought up on the remembrance of the 1920s as the great golden age of the avant-garde, whose focal point had been Paris,” William Barrett writes in The Truants, his memoir of the New York intellectuals. “We expected history to repeat itself: as it had been after the First, so it would be after the Second World War.” The glamorous rumor of existentialism seemed to vindicate their expectations. Camus’s arrival was eagerly awaited not only by Partisan Review but also by the New Yorker, which put him in “The Talk of the Town,” and Vogue, which decided that his saturnine good looks resembled Humphrey Bogart’s.
In “A Room of One’s Own,” Virginia Woolf writes:
Women have served all these centuries as looking glasses possessing the magic and delicious power of reflecting the figure of man at twice its natural size.
The Washington Post announced Monday that it is launching a new “for women by women” website called The Lily, named for the first newspaper “devoted to the interests of women.”
It’s no secret that journalism has long been, and continues to be, far more closed off to women than to men. A now-retired female investigative journalist once told me that when she was working at the New York Times in the 1970s and 80s, if she collaborated with a man on a story, the story could only be double-bylined if it ran on the front page. Otherwise, her name would be dropped, as editors felt a man needed the byline more than she did.
In her post introducing The Lily, editor-in-chief Amy King acknowledged that “these days, publications for women are not so novel.” She’s right: See Jezebel, New York Magazine’s The Cut, Racked, Bustle, Broadly, The Establishment, as well as the newsstand stalwarts Vogue, Elle, Cosmopolitan, Marie Claire, and Glamour — many of which now go beyond the realm of beauty and fashion. The history of specific sections for “women’s interest” is grounded in revenue-grabbing, as Jacqui Shine noted for the Awl in 2014 in her epic history of newspapers’ style sections.
Joseph Pulitzer is credited with developing women’s news largely as a means of attracting new readers and, in turn, new advertisers…
Ishbel Ross, a reporter who also wrote the first history of women in journalism, said that Pulitzer’s women, like everyone in his newsroom, were expected to ‘get a good story or die.’ What’s more, they also ‘had to show their feelings in their reporting.’
Even the most successful women working in journalism had to write to these conventions, fashioning themselves as stunt girls or sob sisters.
As Shine notes, female journalists are expected to not only get a good story, but bare their own inner emotional workings to, in Woolf’s words, reflect “the figure of man at twice its natural size.”
On Twitter, the economics reporter for the Houston Chronicle published a thread questioning the Washington Post’s decision to compartmentalize reporting on women’s issues, as well as its sponsor, JP Morgan Chase.
She was not alone.
Still, others argued that as women in journalism remain sidelined, specific outlets for and by them are helpful. Particularly at a time when even our lawmakers are inclined toward misogynist internet usage, amplifying women’s voices and journalism in the interest of women can only be positive.
In a Poynter profile, Lily editor-in-chief King argued that her critics are mistaken. The Lily, she says, is “not a women’s page or section or vertical in the traditional sense. Instead, it’s an attempt to take the news the Post produces and repackage it for a different audience on distributed platforms.” King’s explanation frames The Lily less as a women’s vertical than as an exercise in finding sustainable business models for journalism, much as Joseph Pulitzer saw journalism by and for women as an economic opportunity. Poynter writes that King hopes The Lily “will offer lessons for how the Post can reach other demographics that it’s not currently reaching.”
Two articles published by the Washington Post and the New York Times this weekend focused on extremely different versions of the U.S. healthcare system. The Post feature — part of a series on “Disabled America,” which focuses on rural populations receiving federal disability checks — bears the dateline of Pemiscot County, Missouri, a place whose dwindling population has an unemployment rate of eight percent. The Times’ feature is part of the series “The Velvet Rope Economy,” which examines “how growing disparities in wealth are leading to privileged treatment of the rich.” Nelson Schwartz reports from San Francisco, currently the second-most densely populated major U.S. city after New York, with the third-highest median household income. Though the city is known for being plagued by homelessness, its poverty rate is 12 percent, lower than the national average, and its unemployment rate is 2.6 percent.
In the Post, Terence McCoy reports on a multi-generational family on disability that struggles to make ends meet in “a county of endless farmland, where the poverty rate is more than twice the national figure, life expectancy is seven years shorter than the national average and the disability rate is nearly three times what it is nationally.”
McCoy offers up a host of statistics gleaned from his own analyses of federal data and interviews both with rural residents and with professionals like social workers, lawyers, school officials and academics. An average of 9.1 percent of working-age people are on disability in rural areas, nearly twice the urban rate and 40 percent higher than the national average. The rate spikes in areas from Appalachia to the Deep South and into Missouri, dubbed “disability belts” by economists, and is highest in 102 counties within those belts, where McCoy estimates a minimum of one in six working-age residents are on disability.
“Multigenerational disability, the Post found, is far more common in poor families,” writes McCoy, gesturing at the difficulty American families face in attempting to climb out of poverty. When the family he follows loses the disabled status of their youngest members, they must convince the government to reinstate it in order to “climb from crushing poverty into manageable poverty.”
Meanwhile, in San Francisco, reporter Nelson Schwartz goes behind the scenes of boutique medical services with “concierge doctors” who target Silicon Valley’s millionaires and billionaires with five-figure annual fees that afford “a chance to cut the line and receive the best treatment.” (As one doctor says, “this is cheaper than the annual gardener’s bill at your mansion.”)
Both patients and doctors express some misgivings: One patient professes “guilt over what he admits is very special treatment,” and physicians are “quick to admit they struggle with the ethical issues of providing elite treatment for a wealthy few, even as tens of millions of Americans struggle to afford basic care.”
Several of the doctors who joined these boutique enterprises say they wish they could have afforded patients at their old practices “the time and energy” they have for their new, ultra-wealthy patients, but none explain why they couldn’t. The insinuation is either pressure from insurance companies or a need to squeeze in as many patients as possible, or a combination of both, but it’s never made explicit.
Catering to the wealthy isn’t just for individual doctors — hospitals are also on board with putting up a velvet rope. Stanford recently committed to a $2 billion wing designed by “star architect” Rafael Viñoly, which features a rooftop garden and a glass-paneled atrium topped with a 65-foot dome. (In the meantime, benefactors are given red blankets when they check in for treatment, so anyone who stops by their rooms knows their status.)
One doctor balked at the notion that healthcare should not be a tiered system. “Whenever I bump into a bleeding-heart liberal, which I am, I mention that schools, housing and food are all tiered systems,” he said. “But healthcare is an island of socialism in a system of tiered capitalism? Tell me how that works.”
“In my old waiting room in Seattle, the C.E.O. of a company might be sitting next to a custodian from that company,” he recalled. “While I admired that egalitarian aspect of medicine, it started to appear somewhat odd. Why would people who have all their other affairs in order — legal, financial, even groundskeepers — settle for a 15-minute slot?”
The Times rejoins, “It’s a fair question.”
But is it? Or is healthcare — like food and housing — a basic human right that should be afforded to everyone, whether they are a CEO or a custodian? A hospital executive argues that courting the ultra-wealthy allows them to provide care to the less wealthy “as reimbursements from private insurers and the federal government shrink.”
In the Washington Post story, the family matriarch has taken to diagnosing illnesses based on her own research, convinced her hyperactive twin grandsons have a slew of disorders. Her daughter lives a life in which “for as long as she could remember, what she couldn’t do had defined her far more than what she could,” and the only medical professional who appears in their life is a therapist “who drives all over the county counseling distressed families.” It doesn’t seem like any substantive care is getting to the people who need it most.