Jacob Silverman | Longreads | June 2018 | 10 minutes (2,419 words)
In Tempe, Arizona, on the cool late-winter night of March 18, Elaine Herzberg, a 49-year-old homeless woman, stepped out onto Mill Avenue. A new moon hung in the sky, providing little illumination. Mill Avenue is a multi-lane road, and Herzberg was walking a bike across; plastic bags with some of her few possessions were dangling from the handlebars. Out of the darkness, an Uber-owned Volvo XC90 SUV, traveling northbound, approached at 39 miles per hour, and struck Herzberg. The Uber came to an unceremonious stop, an ambulance was called, and she died later in a hospital. The car had been in autonomous mode.
At least two Tesla drivers have died while behind the wheel of a car on Autopilot, but Herzberg was the first pedestrian fatality of an autonomous vehicle, or AV. Her death is more than a grim historical fact. It is an unfortunate milestone in one of technology’s great utopian projects: the deployment of AVs throughout society. As an economic effort, it may be revolutionary—driver is one of the most common professions in the United States, and some of the most significant AV initiatives center on making taxis and freight trucks self-driving. As a safety measure, AVs promise to eliminate most of the roughly 35,000 U.S. traffic deaths each year, the vast majority of which are blamed on driver error. While computers are, of course, prone to make mistakes—and vulnerable to hacking—the driverless future, we are told, will feature far less danger than the auto landscape to which we’ve been accustomed. To get there, though, more people are going to die. “The reality is there will be mistakes along the way,” James Lentz, the CEO of Toyota North America, said at a public event after Herzberg was killed. “A hundred or 500 or a thousand people could lose their lives in accidents like we’ve seen in Arizona.” That week, the company announced that it would pause AV testing on public roads. Recently, when I asked if Toyota has calculated how many casualties it expects to cause in pursuit of AVs, a spokesperson replied that the company is focused on reducing the number of fatalities at the hands of human drivers: “Our goal in developing advanced automated technologies is to someday create a vehicle incapable of causing a crash.”
Implied in these remarks is the notion that deaths like Herzberg’s are the price of progress. Following the accident, Uber temporarily halted public AV testing, though it plans to resume in the coming months, just not in Arizona. (Volvo has kept mum, while several companies that contribute self-driving technology to Uber’s vehicles—Nvidia, Mobileye, Velodyne—have claimed that their features were deployed improperly, placing blame on Uber.) An Uber spokesperson told me that the company is cooperating with an investigation by the National Transportation Safety Board and examining its testing process: “We have initiated a top-to-bottom safety review of our self-driving vehicles program, and we have brought on former NTSB Chair Christopher Hart to advise us on our overall safety culture.” But what does it mean to experiment with technologies that we know will kill people, even if the promised results would save lives?
Autonomous vehicles are seeing machines. With sensors, cameras, and radars harvesting petabytes of data, they try to read and make sense of their surroundings—to perceive lanes, trees, traffic lights. According to a preliminary NTSB report, the car that hit Herzberg initially registered her “as an unknown object, as a vehicle, and then as a bicycle.” With 1.3 seconds until impact, the car’s system decided to make “an emergency braking maneuver,” but Uber had disabled the Volvo’s emergency braking mechanism, hoping to avoid a herky-jerky ride. “The vehicle operator is relied on to intervene and take action,” the report notes, but “the system is not designed to alert the operator.” In the Volvo that night, there was a human backup driver behind the wheel, but she was looking down at a screen, relaying data, so she didn’t see Herzberg in time to take over.
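The sequence the report lays out can be rendered as a miniature sketch (hypothetical code for illustration, not Uber’s software): the classifier’s label keeps shifting as impact nears, the system eventually calls for braking, and both the emergency maneuver and the operator alert turn out to be switched off.

```python
# Toy reconstruction of the decision sequence in the NTSB preliminary
# report -- an illustration, not actual AV software. The labels and
# timings below are the ones the report attributes to the system.

from dataclasses import dataclass

@dataclass
class Detection:
    seconds_to_impact: float
    label: str  # the classifier's best guess at that moment

# The system registered Herzberg "as an unknown object, as a vehicle,
# and then as a bicycle." (Times other than 1.3s are assumed here.)
track = [
    Detection(6.0, "unknown object"),
    Detection(4.0, "vehicle"),
    Detection(1.3, "bicycle"),
]

EMERGENCY_BRAKING_ENABLED = False  # disabled to avoid a herky-jerky ride
OPERATOR_ALERT_DESIGNED = False    # "not designed to alert the operator"

def respond(d: Detection) -> str:
    """What the system does at each detection."""
    if d.seconds_to_impact <= 1.3:  # system decides braking is needed
        if EMERGENCY_BRAKING_ENABLED:
            return "emergency braking maneuver"
        if OPERATOR_ALERT_DESIGNED:
            return "alert operator to intervene"
        return "rely on operator to notice and take over"
    return "continue tracking"

outcomes = [respond(d) for d in track]
```

The point of the sketch is the final branch: with braking disabled and no alert designed, the system’s only remaining response was a human who happened to be looking at a screen.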
Like the algorithms that power Google Search or Facebook’s newsfeed, AV decision-making mostly remains the stuff of proprietary trade secrets. But the NTSB investigation and other reportage help sketch a picture of why Uber’s systems failed that night in Tempe: After the accident, Reuters reported that Uber had decreased the number of LIDAR (light detection and ranging, a laser-based sensing method) modules on its vehicles and removed a safety driver (previously, there had been two—one to sit behind the wheel and another to monitor data). These details aren’t necessarily inculpatory, but they suggest an innovate-at-all-costs operation that puts little emphasis on safety.
To conduct testing, the AV industry has quietly—even covertly—spread from private tracks to public roads across the country. A patchwork of municipal, state, and federal regulations means that, while important research is underway in dozens of states, it can be difficult to glean where cars are going and what safety standards are in place. Aiming to lure innovators, Arizona, Texas, and Michigan have competed to provide the lightest regulatory touch. In 2016, after California established a comparatively robust set of AV regulations, Doug Ducey, the governor of Arizona, told Uber, “California may not want you, but we do.” Google’s Waymo and General Motors arrived, too. Ducey encouraged the start of an AV pilot program before the public was informed—Herzberg was probably unaware that these companies were testing in her community when she was hit—and weeks before the crash he issued an executive order explicitly allowing driverless vehicles on city streets. The order set rules making companies liable for criminally negligent fatalities, yet Sylvia Moir, Tempe’s police chief, said that the company would likely not be at fault in the accident; Ducey quickly banned Uber from testing AVs in his state, but hundreds of other self-driving cars are still on the road.
On the federal level, there has not been much scrutiny of how AVs operate. The AV START Act, a bill that would encourage AV testing with minimal federal oversight, has been held up by Democratic senators who worry it doesn’t do enough to address safety concerns. SAE International has established a scale to measure autonomous functions, ranging from 0 to 5 (with 5 being a car so self-directed that it doesn’t need a steering wheel). But most policy decisions have fallen to state legislatures and industry-friendly governors. Unlike pharmaceuticals, which must clear stringent testing standards, Silicon Valley’s products are commonly seen as inherently beneficial, and AV companies seem to have carte blanche to test their inventions.
This regulatory free-for-all has led to a host of questions from concerned citizens and transportation advocates. Matters of liability have not been standardized, which leaves open who should be responsible when an AV crashes—the owner, the manufacturer, the insurer? (Uber has already settled with some of Herzberg’s relatives.) What of the companies whose hardware and software are combined in an AV’s complicated systems? How does a robo-taxi ensure customer compliance, and how might it deal with someone who doesn’t pay or refuses to get out of a car? Should police have the ability to take control of AVs, forcing them to pull over? More broadly, if autonomous vehicles are a technological inevitability, how do we know when they’ve arrived?
Without clear rules—or sufficient data—it may be up to the market to decide on AV standards. Manufacturers like Mercedes are testing levels of autonomy, offering more computer-assisted cruise control and parking features, for instance, without asking drivers to surrender their full attention. In models reliant on a computer’s discretion, however, customers may soon be able to choose between a utilitarian vehicle that will maximize good for all and a “self-protective” one that will preserve the passengers’ safety at all costs. Which can you afford? Someone’s life may depend on the answer.
That Uber’s AV didn’t see Herzberg, a homeless woman, as a human being makes a kind of perverse sense, since AVs—especially robo-taxis—weren’t made for people like her. Neither were the sprawling cities like Tempe where these cars are being tested. Besides their inviting regulatory environments, these areas were chosen for their open road systems, good weather, and relative scarcity of cyclists and pedestrians. At a time when urbanists are preaching multi-modal mobility, from bikes to buses, AVs are a kind of throwback, making streets less accommodating to anyone on foot; by increasing the number of cars—particularly passenger-free delivery vehicles—they tend to worsen traffic and pollution. And even if the auto industry could develop an impossibly perfect algorithm for safety, widespread AV adoption would require massive road infrastructure upgrades to fix lane lines and embed communication beacons; faulty GPS systems, outdated maps, surveillance and privacy challenges, cybersecurity flaws, bandwidth limits, and expensive hardware would all be vexing.
Despite all this, Thomas Bamonte, who works as a senior program manager for AVs at the North Central Texas council of governments, is optimistic. When we spoke, he told me about Drive.ai, a company that recently announced it would launch a pilot fleet of AVs in the city of Frisco. Drive.ai’s service doesn’t much resemble an Uber robo-taxi, he explained: The vehicles are Nissan vans, limited to roaming around a small commercial district during daylight hours; to make them stand out, they have been painted bright orange with a wavy blue stripe bearing the words “Self-Driving Vehicle.” The vans are also equipped with screens that signal when passengers are boarding and when it’s safe to walk past. And they will, at least at first, feature human safety drivers.
The Drive.ai program lacks the ambition of, say, Waymo’s Phoenix-area AV service, which ferries passengers around without a human driver ready to take the wheel, but the project—publicly announced, small in scope, conducted in partnership with city officials—seems to take a more measured approach to AV testing than exists elsewhere. Bamonte described Frisco’s AV program as “kind of crawl, walk, run.”
“We don’t want developers to just plop down unannounced and start doing a service,” Bamonte told me. He compared the Drive.ai testing favorably to Tesla’s, whose cars carry ambitious Autopilot features that have already been deployed in thousands of consumer vehicles, wherever drivers take them. So far, Tesla’s Autopilot mode has been implicated in several high-profile crashes on public roads, including fatal accidents in Florida and California. The Uber crash has added a “note of caution,” Bamonte said, but “it’s our responsibility to continue to explore and test this technology in a responsible way.” For him, that means starting with closed tracks and computer simulations; after a public education campaign and a round of community feedback, deployment on public streets inevitably follows. To learn if these cars can work for us, we have to put them in real-world conditions, he explained. “You just can’t do that in a laboratory.”
Central to AV testing is the “trolley question,” based on a scenario in which a runaway trolley threatens to fatally run over a crowd—unless someone can pull a lever, redirecting the trolley onto another track, where a single person is standing. No matter what happens, this thought experiment proffers, someone is going to die. It’s up to us to choose. With AV testing, that decision ostensibly lies within software. At stake is whether cars can be adequately programmed to select the lesser of two evils: swerving to avoid a crowd of pedestrians if it means killing one pedestrian or the vehicle’s passenger. In a 2016 paper, “The Social Dilemma of Autonomous Vehicles,” three scientists examined public trust in that decision-making process. In a survey of about 2,000 people, most respondents liked the idea of an AV sacrificing itself to save others; but as passengers, they said they would want the car to preserve their own safety no matter what. “To align moral algorithms with human values,” the researchers advised, “we must start a collective discussion about the ethics of AVs—that is, the moral algorithms that we are willing to accept as citizens and to be subjected to as car owners.”
The study’s authors worried that mandating utilitarian AVs—those that would swerve to avoid a crowd—through federal regulation would present a confounding problem: passengers would never agree to be rationally self-sacrificing. “Regulation could substantially delay the adoption of AVs,” they wrote, “which means that the lives saved by making millions of AVs utilitarian may be outnumbered by the deaths caused by delaying.” Things get even more complicated in what are called edge cases, in which an AV may face a variety of thorny weather, traffic, and other conditions at once, forcing a series of complex rapid-fire decisions. The report concludes, “There seems to be no easy way to design algorithms that would reconcile moral values and personal self-interest.”
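The two settings the study contrasts can be made concrete with a toy decision rule (hypothetical code, not drawn from any vendor’s system): each candidate maneuver carries expected deaths for passengers and for pedestrians, and the mode determines which count the car minimizes first.

```python
# Toy moral-algorithm settings -- an illustration of the study's
# distinction, not real AV software.

def choose(options, mode):
    """Pick a maneuver. Each option maps an action name to a tuple
    (expected passenger deaths, expected pedestrian deaths)."""
    if mode == "utilitarian":
        # Minimize total expected deaths, whoever they are.
        return min(options, key=lambda a: sum(options[a]))
    # Self-protective: minimize passenger deaths first, then total.
    return min(options, key=lambda a: (options[a][0], sum(options[a])))

# Stylized trolley-style dilemma: stay on course and hit a crowd,
# or swerve into a barrier, sacrificing the passenger.
dilemma = {
    "stay_on_course": (0, 5),  # passengers safe, five pedestrians killed
    "swerve":         (1, 0),  # one passenger killed, pedestrians spared
}

choose(dilemma, "utilitarian")      # -> "swerve"
choose(dilemma, "self-protective")  # -> "stay_on_course"
```

The survey result falls directly out of the two key functions: the same dilemma yields opposite answers depending on whose deaths are counted first, which is the conflict respondents expressed between what they endorsed as citizens and what they wanted as passengers.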
Azim Shariff—one of the paper’s authors and a professor at the University of California, Irvine—has called for “a new social contract” for AVs. Riding in one will mean giving yourself over to a machine whose “mind” humans don’t understand—and which, in a moment of crisis, may be programmed to prioritize the lives of others over your own. “I’ve kind of wracked my brain to think of another consumer product which would purposefully put their owners at risk against their wishes,” he told me. “It’s really a radically new situation.”
In practice, Shariff went on, cars are unlikely to be faced with stark either-or choices. The trolley question is meant to emblematize tough decision-making for the purpose of moral deliberation; programming morality into our vehicles is a matter of deeper, almost mystical complexity. “The cars are going to have to be choosing in the maneuvers that they make to slightly increase the risk toward a pedestrian rather than the passenger, or slightly increase the risk toward somebody who’s walking illegally versus someone who’s walking legally,” he said. That’s a fraction of a percent here or there. “Only at the aggregate level, with all the cars driving all the miles, will you then see the statistical version of these scenarios emerge.”
It will take billions of miles—and some unknown number of people killed—to gauge whether, by a statistically significant margin, AVs are safer than human-driven cars. For now, there is mostly speculation and experimentation. The death of Elaine Herzberg is “a new data point,” according to Jensen Huang, the CEO of Nvidia, which makes chips for self-driving systems. “We don’t know that we would do anything different, but we should give ourselves time to see if we can learn from that incident,” he said. “It won’t take long.”
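How many miles is “enough”? A back-of-envelope sketch using the statistical “rule of three” gives a floor (the human fatality rate below is an approximate U.S. figure, an assumption not taken from the article): a fleet that logs m fatality-free miles supports a roughly 95 percent upper bound of 3/m on its own fatality rate, so to claim it beats the human rate, that bound must drop below it.

```python
# Back-of-envelope estimate of the miles needed to show an AV fleet is
# at least as safe as human drivers. Assumption: human drivers cause
# roughly 1.16 deaths per 100 million vehicle miles (an approximate
# U.S. figure, not from the article).

HUMAN_FATALITY_RATE = 1.16e-8  # deaths per mile

def miles_needed(human_rate, confidence_factor=3.0):
    """Rule of three: zero fatalities over m miles gives a ~95%
    upper confidence bound of 3/m on the fleet's fatality rate.
    We need that bound to fall below the human rate."""
    return confidence_factor / human_rate

miles = miles_needed(HUMAN_FATALITY_RATE)
print(f"{miles / 1e6:.0f} million fatality-free miles")  # ~259 million
```

And matching the human rate is the easy part: demonstrating a statistically significant improvement, rather than mere parity, would take many times more driving, which is why the verdict Huang anticipates is still some distance off.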