
How Southern Cities Are Joining the Knowledge Economy

Ramil Sitdikov/Sputnik via AP

As manufacturing and agriculture declined in the South, once-thriving operations have left many empty factories and new opportunities in their wake. Now startups are taking them over and revitalizing small cities in the process.

At Bloomberg Businessweek, Craig Torres and Catarina Saraiva describe how Greenville, South Carolina, has managed to attract highly skilled workers and revive its downtown by building its tech economy. Greenville has a network of investors, a culture of risk-taking, proximity to a research university, and a long record of making itself attractive to college graduates. It has to, in order to compete with big tech employers in big cities on both coasts. The question other small Southern cities are asking is whether they can replicate Greenville’s success.

“We tell these communities, ‘Don’t worry: The entrepreneurs are going to put you back to work,’” says Edward Conard, who led Bain Capital’s industrial group and is now an adjunct fellow at the American Enterprise Institute, a Washington think tank. “But they aren’t coming.” Regional economies “are falling further off the technological frontier because companies like Google and Facebook are going to scarf that talent up.”

Greenville has managed to buck the trend. It had almost 5 young businesses per 1,000 people in 2014, the latest year for which data are available—close to the national total of 5.2, Boston’s 5.5, and Chicago’s 5.6. Danville’s total, meanwhile, was 3 per 1,000 people. (None of these cities has been immune to the overall decline in U.S. startups since the 2007-09 recession.)

While pundits focus on the importance of upgrading workforce skills, kick-starting a cycle of wealth-building by attracting and retaining new businesses is a multipronged effort.

Read the story

The Top 5 Longreads of the Week

A sign that reads “Unnecessary Noise Prohibited”
(Visions of America/UIG via Getty Images)

This week, we’re sharing stories from Sloane Crosley, Jason Fagone, Bronwen Dickey, Heather Radke, and Kelly Conaboy.

Sign up to receive this list free every Friday in your inbox. Read more…

Seeking a Roadmap for the New American Middle Class

The next American middle class
Illustration by Zoë van Dijk

Livia Gershon | Longreads | March 2018 | 8 minutes (1,950 words)

Over the past few months, Starbucks, CVS, and Walmart announced higher wages and a range of other benefits like paid parental leave and stock options. Despite what the brands say in their press releases, the changes probably had little to do with the Republican corporate tax cuts, but they do reflect a broader economic prosperity, complete with a tightening labor market. In the past couple of years, real wages hit their highest levels ever, and even the lowest-paid workers started getting raises. As Matt Yglesias wrote at Vox, “for the first time in a long time, the underlying labor market is really healthy.”

But it doesn’t feel that way, does it? From the new college graduate facing an unstable contract job and mounds of debt to the 30-year-old in Detroit picking up an extra shift delivering pizzas this weekend, it just seems like we’re missing something we used to have.

In a 2016 Conference Board survey, only 50.8 percent of U.S. workers said they were satisfied with their jobs, compared with 61 percent in 1987 when the survey was first done. In fact, job satisfaction hasn’t come close to that first reading in this century. We’re also more anxious and depressed today than we’ve been since the depths of the recession, and we’re dying younger — particularly if we’re poor.

So maybe this is a good moment to stop and think about what really good economic news would look like for American workers. Imagine for a moment that everything goes right. The long, slow recovery from the Great Recession continues, rather than reversing itself and plunging us back into high unemployment. Increased automation doesn’t displace a million truck drivers but creates new, more skilled driving jobs. The retirement of the Baby Boomers reduces labor supply, driving up wages at nursing homes, call centers, and the rest of the gigantic portion of the economy where pay is low.

Would this restore dignity to work and a sense of optimism to the nation? Would it bring back the kind of pride we associate with the 1950s GM line worker?

I don’t think it would. I think it would take far more fundamental changes to win justice for American workers. But I also think it’s possible to strive for something way better than the postwar era we often remember as a Golden Age for workers.

Let’s start by dispelling the idea that postwar advances for American workers were some kind of natural inevitability that could never be replicated today. Yes, in the 1940s, the United States was in a commanding position of economic dominance over potential rivals decimated by war. And yes, companies were able to translate the manufacturing capacity and technological know-how built up through the military into astounding new bounty for consumers. But, when it comes to profitability, business has also had plenty of boom times in recent decades, with no parallel advances for workers.

This is the moment to stop and think about what really good economic news would look like for American workers.

Let’s also set aside the nostalgia about how we used to make shit in this country. Page through Working, Studs Terkel’s classic 1974 book of interviews with a broad range of workers, and factories come across as a kind of hellscape. A spot welder at a Ford plant in Chicago describes standing in one place all day, with constant noise too loud to yell over, suffering frequent burns and blood poisoning from a broken drill, at risk of being fired if he leaves the line to use the bathroom. “Repetition is such that, if you were to think about the job itself, you’d slowly go out of your mind,” he told Terkel.

The stable, routine corporate office work that also thrived in the postwar era certainly wasn’t as unpleasant as that, but there’s a whole world of cultural figures, from Willy Loman to Michael Scott, that suggest it was never an inherent font of meaning.

The fact that the Golden Age brought greater wealth, pride, and status to American workers, both blue- and white-collar, wasn’t really about the booming economy or the nature of the work. It was a result of power politics and deliberate decisions. In the 1930s and ‘40s, unionized workers, having spent decades battling for power on the job, at severe risk to life and livelihood, were a powerful force. And CEOs of massive corporations like General Motors were scared enough of radical workers, and hopeful enough about the prospects of shared prosperity, to strike some deals.

A consensus about how jobs ought to work emerged from these years. Employers would provide decent pay, health insurance, and pensions for large swaths of the country’s workers. The federal government would build a legal framework to address labor disputes and keep corporate monopolies from getting out of control. Politicians from both parties would march in the Labor Day parade every year, and workers would get their fair share of the new American prosperity.

Today, of course, the postwar consensus has broken down. Even if average workers are making more money than we used to, the gap between the average and the super-rich makes us feel like we’re getting nowhere. We may be able to afford iPhones and big-screen TVs, but we’ve got minimal chances of getting our kids into the elite colleges that define the narrow road to success.

And elite shows of respect for workers ring more and more hollow. Unions, having drastically declined in membership, no longer have a seat at some of the tables they used to. Politicians celebrate businesses’ creation of jobs, not workers’ accomplishment of necessary and useful labor. A lot of today’s masters of industry clearly believe that workers are an afterthought, since robots will soon be able to do anyone’s job except theirs.

But let’s not get too nostalgic about the Golden Age. As many readers who are not white men may be shouting at me by this point, there was another side to these mid-century ideas about work. The entire ideological framework defining a job with dignity was inextricably tied up with race and gender.

From the start of the industrial revolution, employers used racism to divide workers. And union calls for respect and higher wages were often inseparable from demands that companies hire only white men. The Golden Age didn’t just provide white, male workers with higher wages than everyone else but also what W.E.B. Du Bois called the “public and psychological wage” of a sense of racial superiority.

Just as importantly, white men in the boom years also won stay-at-home wives. With rising male wages, many white women — and a much smaller number of women of other races — could now focus all their energy on caring for home and family. For the women, that meant escape from working at a mill or cooking meals and doing laundry for strangers. But it also meant greater economic dependence on their husbands. For the men, it was another boost to their living standard and status.

Golden Age corporate policies, union priorities, and laws didn’t create the ideal of the white, breadwinner-headed family, but they did reinforce it. Social Security offered benefits to workers and their dependents rather than to all citizens, and excluded agricultural and domestic workers, who were disproportionately black. The GI Bill helped black men far less than white ones and left out most women except to the extent that their husbands’ benefits trickled down to them.

Let’s also set aside the nostalgia about how we used to make shit in this country.

Today, aside from growing income inequality, unstable jobs, and the ever-skyward climb of housing and education costs, a part of the pain white, male workers are feeling is the loss of their unquestioned sense of superiority.

So, can we imagine a future Golden Age? Is there a way to make working for Starbucks fulfill all of us the way we remember line work at GM fulfilling white men? Maybe. With an incredible force of political will, it might be possible to rejigger the economy so that modern jobs keep getting better. It would start with attacking income inequality head-on. The government could bust up monopolistic tech giants, encourage profit-sharing, and maybe even take a step toward redistributing inherited wealth. We’d also need massive social change to ensure people of color and women equal access to the good new jobs, and men and white people would need to learn to live with a loss of the particular psychological wages of masculinity and whiteness.

But even all that would still fail to address one thing that made work in the Golden Age fulfilling for men: the wives. Stay-at-home moms of the mid-twentieth century weren’t just a handy status symbol for their men. They were household managers and caregivers, shouldering the vast majority of child-raising labor and creating a space where male workers could rest and be served. And supporting a family was a key ingredient that made otherwise draining, demeaning jobs into a source of meaning.

Few men or women see a return to that ideal as a good idea today. But try imagining what good, full-time work for everyone looks like without it. Feminist scholar Nancy Fraser describes that vision as the Universal Breadwinner model — well-paid jobs, with all the pride and status that come with them, for all men and women. She notes that it would take massive spending to outsource childcare and other traditionally unpaid “female” work — particularly since those jobs would need to be good jobs too. It would also leave out people with personal responsibilities that they couldn’t, or wouldn’t, hand over to strangers, as well as many with serious disabilities. And it certainly wouldn’t solve the problem many mothers and fathers report today of having too little time to spend with family.

A really universal solution to the problem of bad jobs would have to go beyond “good jobs” in the Golden Age model. It would be a world where we can take pride in our well-paid jobs at Starbucks without making them the center of our identities. That could mean many more part-time jobs with flexible hours, good pay, and room for advancement. It could mean decoupling benefits like health care and retirement earnings from employment and providing a hefty child allowance. Certainly, it would mean a social and psychological transformation that lets both men and women see caring work, and other things outside paid employment, as fully valuable and meaningful as a job.

As a bonus, this kind of solution would also make sense when we do fall back into recession, or if the robots do finally come for a big chunk of our jobs.

All this might sound absurdly utopian. We are, after all, living in a world where celebrity business leaders claim to work 80-plus hour weeks while politicians enthusiastically deny health care to people who can’t work.

But the postwar economy didn’t happen on its own. It was the product of a brutal, decades-long fight led by workers with an inspiring, flawed vision. And today, despite everything, new possibilities are emerging. Single-payer health care is a popular idea, and “socialism” has rapidly swung from a slur to a legitimate part of the political spectrum. Self-help books like The 4-Hour Workweek — which posit the possibility of a radically different work-life balance, albeit based on individual moxie rather than social change — have become a popular genre. Young, black organizers in cities across the country are developing their own cooperative economic models. And if there’s any positive lesson we can take from the current political moment, it’s that you never know what could happen in America. Maybe a new Golden Age is possible. It’s at least worth taking some time to think about how we would want it to look.

***

Livia Gershon is a freelance journalist based in New Hampshire. She has written for the Guardian, the Boston Globe, HuffPost, Aeon and other places.

 

To Live and Die in Utopian New Zealand

Kai Schwörer/picture-alliance/dpa/AP Images

If the world is going to end with a whimper, not a bang, then some of the world’s richest people are going to whimper together in New Zealand. People like Trump supporter and PayPal co-founder Peter Thiel have staked their claim on the small island nation as the ideal refuge during a global apocalypse. The narcissism involved in such survivalism is staggering but par for the course; capitalists like Thiel have destabilized our world economically and ecologically, and now they want to buy their way out of the destructive ramifications in someone else’s country. New Zealand has strict rules around citizenship and land ownership for foreign nationals, which is why Thiel’s secretive acquisition of a piece of the South Island raised many concerns.

Curious himself, Irish writer Mark O’Connell went searching for answers for The Guardian, trying to understand the peculiar ideological appeal of New Zealand to a certain class of Silicon Valley elite. What he found while touring the New Zealand countryside is unnerving and infuriating, particularly the way many billionaires plan to profit from global catastrophe, and the possibility that Thiel intends to turn New Zealand into his own country after its collapse. Fortunately, O’Connell sees the dark comedy of it all. He calls Thiel “a canary in capitalism’s coal mine who also happens to have profited lavishly from his stake in the mining concern itself.” He shows how Thiel is “a caricature of outsized villainy” and “a human emblem of the moral vortex at the centre of the market.”

The Kiwis I spoke with were uncomfortably aware of what Thiel’s interest in their country represented, of how it seemed to figure more generally in the frontier fantasies of American libertarians. Max Harris – the author of The New Zealand Project, the book that informed the game-sculptures on the upper level of The Founder’s Paradox – pointed out that, for much of its history, the country tended to be viewed as a kind of political Petri dish (it was, for instance, the first nation to recognise women’s right to vote), and that this “perhaps makes Silicon Valley types think it’s a kind of blank canvas to splash ideas on”.

When we met in her office at the Auckland University of Technology, the legal scholar Khylee Quince insisted that any invocation of New Zealand as a utopia was a “giant red flag”, particularly to Māori like herself. “That is the language of emptiness and isolation that was always used about New Zealand during colonial times,” she said. And it was always, she stressed, a narrative that erased the presence of those who were already here: her own Māori ancestors. The first major colonial encounter for Māori in the 19th century was not with representatives of the British crown, she pointed out, but with private enterprise. The New Zealand Company was a private firm founded by a convicted English child kidnapper named Edward Gibbon Wakefield, with the aim of attracting wealthy investors with an abundant supply of inexpensive labour – migrant workers who could not themselves afford to buy land in the new colony, but who would travel there in the hope of eventually saving enough wages to buy in. The company embarked on a series of expeditions in the 1820s and 30s; it was only when the firm started drawing up plans to formally colonise New Zealand, and to set up a government of its own devising, that the British colonial office advised the crown to take steps to establish a formal colony. In the utopian fantasies of techno-libertarians like Thiel, Quince saw an echo of that period of her country’s history. “Business,” she said, “got here first.”

Read the story

The Top 5 Longreads of the Week

Aerial view of the Golden Gate Bridge in dark fog
Tayfun Coskun / Anadolu Agency / Getty Images

This week, we’re sharing stories from Emily Chang, Kiera Feldman, Motoko Rich, David J. Unger, and Nicole Chung.

Sign up to receive this list free every Friday in your inbox. Read more…

The More We Disrupt, The More Things Are Exactly The Same

Photo by Daniel Benavides (CC BY 2.0), via Wikimedia Commons

Vanity Fair published an excerpt from Emily Chang’s forthcoming book, Brotopia: Breaking Up the Boys’ Club of Silicon Valley. TL;DR: they have a lot of sex and drug parties at which they disrupt conventional morality by… replicating conventional sexist, heteronormative behaviors.

They don’t necessarily see themselves as predatory. When they look in the mirror, they see individuals setting a new paradigm of behavior by pushing the boundaries of social mores and values. “What’s making this possible is the same progressiveness and open-mindedness that allows us to be creative and disruptive about ideas,” Founder X told me. When I asked him about Jane Doe’s experience, he said, “This is a private party where powerful people want to get together and there are a lot of women and a lot of people who are fucked up. At any party, there can be a situation where people cross the line. Somebody fucked up, somebody crossed the line, but that’s not an indictment on the cuddle puddle; that’s an indictment on crossing the line. Doesn’t that happen everywhere?” It’s worth asking, however, if these sexual adventurers are so progressive, why do these parties seem to lean so heavily toward male-heterosexual fantasies? Women are often expected to be involved in threesomes that include other women; male gay and bisexual behavior is conspicuously absent. “Oddly, it’s completely unthinkable that guys would be bisexual or curious,” says one V.C. who attends and is married (I’ll call him Married V.C.). “It’s a total double standard.” In other words, at these parties men don’t make out with other men. And, outside of the new types of drugs, these stories might have come out of the Playboy Mansion circa 1972.

Be forewarned, these grown adult people liberally use the phrases “cuddle puddle” and “founder hounder,” and you’ll want to budget some time to scream into a pillow and then take a shower after you finish reading.

Read the excerpt

Longreads Best of 2017: Science, Technology, and Business Writing

We asked writers and editors to choose some of their favorite stories of the year in various categories. Here is the best in science, tech, and business writing.

Deborah Blum
Director of the Knight Science Journalism program at MIT and author of The Poisoner’s Handbook

The Touch of Madness (David Dobbs, Pacific Standard)

A beautifully rendered exploration of the slow, relentless creep of schizophrenia into the life of a brilliant graduate student, her slow recognition of the fact, and the failure of her academic community to recognize the issue or to support her. Dobbs’ piece functions as an inquiry both into our faltering understanding of mental illness and into our cultural failure to respond to it with integrity. It’s the kind of compassionate and morally centered journalism we should all aspire to.


Elmo Keep
Australian writer and journalist living in Mexico, runner-up for the 2017 Bragg Prize for Science Writing

How Eclipse Chasers Are Putting a Small Kentucky Town on the Map (Lucas Reilly, Mental Floss)

Anyone willing to write about syzygy in the shadow of Annie Dillard’s classic 1982 essay “Total Eclipse” has balls for miles. Reilly’s decision to focus on the logistics faced by tiny towns preparing to be inundated by thousands of eclipse watchers was inspired. It brilliantly conveyed the shared enthusiasms that celestial events animate in us. Between these two essays, I’m convinced a total eclipse would be a psychic event so overwhelming I might not survive it. I’ve got 2037 in Antarctica on my bucket list — if it’s still there in twenty years. Read more…

Where Do We Go From Here?

Donald Bowers / Getty Images for The Weinstein Company

Felling a man of Harvey Weinstein’s stature was undoubtedly going to create aftershocks. It must help that the actresses coming forward with accusations against him are famous, people we recognize, people we believe we love even if we don’t actually know them. Their fame helps us care about them and, as female crew members afraid to come forward about their own abuse told The Hollywood Reporter, it helps the actresses:

“We don’t have the power that Rose McGowan or Angelina Jolie has,” says one female below-the-liner, and others agree that it is a lot easier for a production to replace a woman on the crew than it is to lose a bankable actor or director.

The female crew members told THR they’re afraid to come forward, lest a producer deem them “a liability” or “a troublemaker.” It’s not the men who abuse that are liabilities, it’s the women who would be so inconvenient as to not shut up and take it. One crew member says what many of us know about human resources departments: “Human resources is not there for us; it’s there for the company. To protect it from a liability.” Again, here, the liability is the person who tells the truth, not the person who behaves wrongly.

Still, since the New York Times and the New Yorker published their Weinstein exposés, less famous women have revealed abuse by powerful men. Men have followed with apologies. (The best one came from Ryan Gosling, who said he was disappointed in himself for not knowing about Weinstein’s treatment of women sooner — we’ll come back to this.) Kim Masters was finally able to get an outlet to publish a piece she’d been doggedly working on for months, in which a producer on the Amazon show The Man in the High Castle came forward to report harassment by a top Amazon executive, who has since resigned.

The #MeToo campaign on social media — originally created by a black woman activist, Tarana Burke, 10 years ago and popularized in the wake of Weinstein by actress Alyssa Milano and others — brought out even more stories beyond the entertainment industry. The #MeToo campaign also seems to have been eye-opening for a lot of men. Maybe you think we should be pleased about this, but I feel more like Alexandra Petri, who wrote in the Washington Post, “I am sick of having to suffer so that a man can grow.”

I received a late-night email this week from someone who crossed a line with me 13 years ago. He wrote that he “struggled for a while tonight” with the email, which made me laugh, that he thought I should care that he “struggled” for a few hours that night, after 13 years. But of course he thought that. His whole email was about him. He wasn’t sure if he had done anything wrong, but thought maybe he had. He appeared to not remember that 10 years ago, I had written him an email of my own, telling him how his violation had hurt me. He had dismissed it then, telling me — a college student who had worked up a tremendous amount of courage to even write him that email — that I was overreacting. Hysterical woman, your feelings are incorrect. He wants forgiveness now, but can’t be bothered to go through his email and see that I told him, a decade ago, exactly what he did wrong and how it hurt me.

Read more…

Immature Architects Built the Attention Economy

SMKR / Barcroft USA / Barcroft Media via Getty Images

A cadre of young technologists at Google, Twitter, and Facebook admit it: they didn’t think making smartphones addictive would make smartphones this addictive. Come to think of it, the negative consequences of the persuasive design they concocted in their twenties never really occurred to them.

Take Loren Brichter, the designer who created pull-to-refresh (the downward abracadabra swipe that prompts new app content to load). Brichter was 24 when he accidentally popularized this ubiquitous 2D gambling gesture. Of course, analogies between pull-to-refresh and slot machines are only clear to him now — in retrospect, through the hindsight bestowed upon him by adulthood.

“Now 32, Brichter says he never intended the design to be addictive,” Paul Lewis reports in the Guardian’s latest special technology feature. Yet even the tech whiz behind the curtain has since fallen prey to some of his old design tricks. “I have two kids now,” Brichter confesses, “and I regret every minute that I’m not paying attention to them because my smartphone has sucked me in.”

As if these compulsions weren’t hollow enough, push notification technology rendered pull-to-refresh obsolete years ago. Apps can update content automatically, so user nudges like swiping and pulling aren’t just addictive, they’re redundant. According to Brichter, pull-to-refresh “could easily retire,” but now it’s become like the Door Close button in elevators that close automatically: “People just like to push it.”

So they do — over and over and over and over. In cases of addiction, people “just like to” touch their phones more than 2,617 times a day. As the opportunity costs of all that frittered attention really start to add up, Brichter and his peers find themselves fundamentally questioning their legacies:

“I’ve spent many hours and weeks and months and years thinking about whether anything I’ve done has made a net positive impact on society or humanity at all,” [Brichter] says. He has blocked certain websites, turned off push notifications, restricted his use of the Telegram app to message only with his wife and two close friends, and tried to wean himself off Twitter. “I still waste time on it,” he confesses, “just reading stupid news I already know about.” He charges his phone in the kitchen, plugging it in at 7pm and not touching it until the next morning.

“Smartphones are useful tools,” he says. “But they’re addictive. Pull-to-refresh is addictive. Twitter is addictive. These are not good things. When I was working on them, it was not something I was mature enough to think about. I’m not saying I’m mature now, but I’m a little bit more mature, and I regret the downsides.”

Lewis spotlights several designers who’ve come to similar ethical crossroads in their 30s, many of whom have quit posts at household-name technological juggernauts in the hopes of designing our way out of all this squandering.

If the attention economy is just a euphemism for the advertising economy, these techno-ethicists ask, can we intelligently design our way back to safeguarding our actual intentions? Can we take back the time we’ve lost to touchscreen-enabled compulsions, and reallocate that time to bend it to our will again? Or have we forgotten that human will and democracy, as one of Lewis’ “refuseniks” reminds us, are one and the same?

James Williams does not believe talk of dystopia is far-fetched. The ex-Google strategist who built the metrics system for the company’s global search advertising business, he has had a front-row view of an industry he describes as the “largest, most standardised and most centralised form of attentional control in human history”.

Williams, 35, left Google last year, and is on the cusp of completing a PhD at Oxford University exploring the ethics of persuasive design. It is a journey that has led him to question whether democracy can survive the new technological age.

He says his epiphany came a few years ago, when he noticed he was surrounded by technology that was inhibiting him from concentrating on the things he wanted to focus on. “It was that kind of individual, existential realisation: what’s going on?” he says. “Isn’t technology supposed to be doing the complete opposite of this?”

That discomfort was compounded during a moment at work, when he glanced at one of Google’s dashboards, a multicoloured display showing how much of people’s attention the company had commandeered for advertisers. “I realised: this is literally a million people that we’ve sort of nudged or persuaded to do this thing that they weren’t going to otherwise do,” he recalls.

If the attention economy erodes our ability to remember, to reason, to make decisions for ourselves – faculties that are essential to self-governance – what hope is there for democracy itself?

“The dynamics of the attention economy are structurally set up to undermine the human will,” he says. “If politics is an expression of our human will, on individual and collective levels, then the attention economy is directly undermining the assumptions that democracy rests on.” If Apple, Facebook, Google, Twitter, Instagram and Snapchat are gradually chipping away at our ability to control our own minds, could there come a point, I ask, at which democracy no longer functions?

“Will we be able to recognise it, if and when it happens?” Williams replies. “And if we can’t, then how do we know it hasn’t happened already?”

Read the story

Fine for the Whole Family

Lynne Gilbert / Getty Images

Was I a picky eater as a child? Yes. But now my parents are pickier.

Selecting an appropriate restaurant for a visit from my folks has made for a decade-long challenge. In theory, I should have no shortage of options — New York City is fairly renowned for its culinary variety — but the city itself is short on a few of my parents’ preferences.

Over countless attempts and hundreds of plates, I’ve learned that the right atmosphere requires a delicate ambience of peace and quiet. (We don’t have that here.) There should be ample space. (We don’t have that, either.) Waitstaff should be more talented than necessary, with a cast-iron sense of humor that can withstand my dad’s idea of fun. (It’s the kind of fun that happens after we’ve left: he’ll rib a server with theatrical just-kidding complaints for two hours, then tip big.) It shouldn’t be crowded but it shouldn’t be empty. The bringer of cheese for the pasta should probably just leave the cheese. Dad won’t eat anything spicy. Mom won’t eat anything raw. Mom will always ask if the table is okay, which always sounds like the table isn’t okay, but when I ask her if she thinks the table is okay, she makes this face like, “Bail me out.”

Have we all become people who shouldn’t be taken anywhere? Probably. I’ve gotten used to my perennial failure to find places that thrive at this impossible nexus of enchantments. I doubt there is a food solution that will always make everyone in this particular triangle of our family totally happy. But for a while there, our solution was Olive Garden.

Olive Garden was our go-to when I was in college. There, everyone was happy — or if we weren’t, everyone was fine. My dad would order Shrimp Scampi; I would order Chicken Marsala; my mom would make their Famous House Salad more famous. We’d eat all the breadsticks, request our first refill, then wrap the second batch to go. I’d reheat them one at a time in my dorm room microwave, wrapping each in a paper towel that would soak up five finger-pressed blots of oil I wouldn’t have to clean. That was where I set the bar those days — that’s all it took to make for a singular restaurant experience with my family. Would there be leftovers? Great. Olive Garden was fine, and fine was good.

In “Dear Olive Garden, Never Change,” the latest installment in Eater’s Death of Chains series on the slow decline of middlebrow chain restaurants, Helen Rosner reminds me that this anodyne fine-for-the-whole-family feel is completely by design. “One of the things I love about the Olive Garden,” Rosner writes, “is its nowhereness. I love that I can walk in the door of an Olive Garden in Michigan City, Indiana, and feel like I’m in the same room I enter when I step into an Olive Garden in Queens or Rhode Island or the middle of Los Angeles. There is only one Olive Garden, but it has a thousand doors.”

After three years at Vox Media as Eater’s Features Editor turned Executive Editor turned Editor-at-Large, Rosner recently announced her departure from “the best goddamn food publication in the world.” She tweeted mysteriously to watch this space for updates, noting only that she is moving on “to crush some new things.” If they’re anything like her greatest hits thus far — on glorified vending machines, Tina Fey’s sheetcaking, chicken tenders, Trump’s ketchup-covered crime scenes, and takedowns of chocolatiers who may not always have had beards — her readers will be sure to bring their bottomless appetites to her next endeavor.

I feel an intense affinity for Olive Garden, which — like the lack of olives on its menu — is by design. The restaurant was built for affinity, constructed from the foundations to the faux-finished rafters to create a sense of connection, of vague familiarity, to bring to mind some half-lost memory of old-world simplicity and ease. Even if you’ve never been to the Olive Garden before, you’re supposed to feel like you have. You know the next song that’s going to play. You know how the chairs roll against the carpet. You know where the bathrooms are. Its product is nominally pasta and wine, but what Olive Garden is actually selling is Olive Garden, a room of comfort and familiarity, a place to return to over and over.

In that way, it’s just like any other chain restaurant. For any individual mid-range restaurant, return customers have always been an easy majority of the clientele, and chain-wide, it’s overwhelmingly the case: If you’ve been to one Olive Garden, odds are very high you’ve been to two or more. If the restaurant is doing it right, though, all the Olive Gardens of your life will blur together into one Olive Garden, one host stand, one bar, one catacomb of dining alcoves warmly decorated in Toscana-lite. Each Olive Garden is a little bit different, but their souls are all the same.

Read the story