Meredith Broussard | The trolley problem | 12.04.2019

We should really focus on making human-assistance systems instead of on making human-replacement systems. The point is not to make a world run by machines; people are the point. We need human-centered design.

“Someday” is the most common way to talk about autonomous vehicles. Not if, but when. This seems strange to me. (...) The self-driving car doesn’t really work. Or, it works well enough in easy driving situations: a clear day, an empty highway with recently painted lines. (...) If you set up the conditions just right, it looks like it works. However, the technical drawbacks are abundant. Continuous autonomous driving requires two onboard servers—one for operation, one for backup—and together the servers draw about five thousand watts, all of which is given off as heat. That’s the wattage you’d need to heat a four-hundred-square-foot room, and no one has yet figured out how to incorporate the cooling required to counter it.(1)
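
For a rough sense of why five thousand watts lands in room-heating territory, here is a minimal back-of-the-envelope sketch in Python; the ten-watts-per-square-foot sizing guideline is my assumption, not a figure from the excerpt, and the numbers are only illustrative.

    # Back-of-the-envelope check of the heating comparison above.
    # Assumption: electric resistance heating is commonly sized at roughly
    # 10 watts per square foot; the 5,000-watt figure comes from the text.
    server_load_watts = 5000        # two onboard servers, operation plus backup
    room_square_feet = 400          # the room size used in the comparison
    watts_per_square_foot = server_load_watts / room_square_feet
    print(f"{watts_per_square_foot:.1f} W per square foot")  # 12.5, in line with the heating guideline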

Ryan (the Tesla salesperson – ed.) directed me onto the West Side Highway and into traffic. Ordinarily, I take my foot off the accelerator and drift up to a stop light, braking before I get there—but the Tesla has regenerative braking, which meant that the braking kicked in as soon as I took my foot off the gas pedal. It felt disorienting, this need to drive differently. Someone honked at me. I couldn’t tell if he honked because I was being weird about the traffic light, if he was giving me a hard time for being in a luxury car, or if he was just an ordinary NYC jerk.

I drove down the highway and turned onto cobblestone-paved Clarkson Street. It felt less bumpy than usual. Ryan directed me just past Houston onto a block-long stretch of smooth road, with no driveways and few pedestrians, that runs along the back of a shipping facility. “Open it up,” Ryan urged me. “There’s nobody around. Try it.”

I didn’t need to be told twice. I pressed the pedal to the metal—I had always wanted to do that—and the car surged ahead. The power was intoxicating. We were all thrust back against the seats with the force of acceleration. “Just like Space Mountain!” said Ryan. My son, in the back seat, agreed. I regretted that the block was so short. We turned back onto the West Side Highway and I hit the accelerator again, just to feel the surge. Everyone was thrown back against the seats again. “Sorry,” I said. “I love this.” Ryan nodded reassuringly. “You’re a very good driver,” he told me. I beamed. I realized he probably says this to everyone, but I didn’t care. My husband, I noticed in the rearview mirror, looked a little green.

“This is the safest car on the market,” Ryan said. “The safest car ever made.” He told a story about the NHTSA’s crash testing of the Tesla: the agency couldn’t manage to wreck it. “They tried to flip it—and they couldn’t. They had to get a forklift to flip it over. We did the crash test, where the car drives into a wall—we broke the wall. They dropped a weight on the car, we broke the weight. We’ve broken more pieces of test equipment than any car ever.”

We passed another Tesla in Greenwich Village, and we waved. This is a thing that Tesla owners do: they wave to each other. Drive a Tesla on the highway in San Francisco, and your arm gets tired from waving. Ryan kept referring to Elon Musk. A cult of personality surrounds Musk in a way it does no other car designer. Who designed the Ford Explorer? I have no idea. But Elon Musk, even my son knew. “He’s famous,” my son said. “He was even a guest star on The Simpsons.”

We parked and took a picture of my son and me standing next to the bright white car, its wings up. Then we got into our family car, parked outside. “This feels so old-fashioned now,” my son said. We drove home down the West Side Highway, then over the cobblestones of Clarkson Street. We jolted and bobbled over the stones. It was the exact opposite of the smooth ride we felt in the Tesla. My car felt like it was shaking me at a low level. It was like the time I went to Le Bernardin for lunch, then came home and realized the only thing we had for dinner was hot dogs.

As a car, the Tesla is amazing. As an autonomous vehicle, it leaves me skeptical. Part of the problem is that the machine ethics haven’t been finalized because they are very difficult to articulate. The ethical dilemma is generally framed in terms of the trolley problem, a philosophical exercise. Imagine you’re driving a trolley that’s hurtling down the tracks toward a crowd of people. You can divert it to a different track, but you will hit one person. Which do you choose: certain death for one, or for many? Philosophers have been hired by Google and Uber to work out the ethical issues and embed them in the software. It hasn’t worked well. In October 2016, Fast Company reported that Mercedes programmed its cars to always save the driver and the car’s occupants.(2) This is not ideal. Imagine an autonomous Mercedes skidding toward a crowd of kids standing at a school bus stop next to a tree. The Mercedes’s software will choose to hit the crowd of children instead of the tree because that is the strategy most likely to ensure the safety of the driver—whereas a person would likely steer into the tree, because young lives are precious.

Imagine the opposite scenario: the car is programmed to sacrifice the driver and the occupants in order to protect bystanders. Would you get into that car with your child? Would you let anyone in your family ride in it? Do you want to be on the road, or on the sidewalk, or on a bicycle, next to cars that have no drivers and have unreliable software that is designed to kill you or the driver? Do you trust the unknown programmers who are making these decisions on your behalf? In a self-driving car, death is a feature, not a bug.
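
To make concrete what it means to embed such a choice in software, here is a minimal, entirely hypothetical sketch in Python (not any manufacturer's actual code, and the risk numbers are invented) showing how a single hard-coded weight can encode whose safety the planner privileges:

    # Hypothetical illustration only -- not any manufacturer's actual logic.
    # One constant decides whose harm counts more in the planner's scoring.
    OCCUPANT_WEIGHT = 10.0  # values > 1 privilege occupants over bystanders

    def expected_harm(maneuver, occupant_risk, bystander_risk):
        """Score a candidate maneuver; the planner picks the lowest score."""
        return OCCUPANT_WEIGHT * occupant_risk + bystander_risk

    def choose_maneuver(candidates):
        # candidates: list of (name, occupant_risk, bystander_risk) tuples
        return min(candidates, key=lambda c: expected_harm(*c))

    # The bus-stop scenario from the text, with made-up risk numbers:
    options = [
        ("steer into the tree", 0.6, 0.0),        # risky for occupants, safe for bystanders
        ("continue toward the crowd", 0.1, 0.9),  # safe for occupants, risky for the children
    ]
    print(choose_maneuver(options)[0])
    # With OCCUPANT_WEIGHT = 10.0 this prints "continue toward the crowd";
    # any weight below 1.8 makes the same code steer into the tree instead.

The point of the sketch is not the arithmetic but that someone has to choose that weight, and nothing about riding in the car tells you who chose it or why.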

Who does this technology serve? How does it serve us to use it? If self-driving cars are programmed to save the driver over a group of kindergarteners, why?

The trolley problem is a classic teaching example of computer ethics. Many engineers respond to this dilemma in an unsatisfying way. “If you know you can save at least one person, at least save that one. Save the one in the car,” said Christoph von Hugo, Mercedes’s manager of driverless car safety, in an interview with Car and Driver.(3) Computer scientists and engineers (...) don’t tend to think through the precedent that they’re establishing or the implications of small design decisions. They ought to, but they often don’t. Engineers, software developers, and computer scientists have minimal ethical training. The Association for Computing Machinery (ACM), the most powerful professional association in computing, does have an ethical code. In 2016, it was revised for the first time since 1992. The web, remember, launched in 1991 and Facebook launched in 2004.

There’s an ethics requirement in the recommended standard computer science curriculum, but it isn’t enforced. Few universities have a course in computer or engineering ethics on the books. Ethics and morality are beyond the scope of our current discussion, but suffice it to say that this isn’t new territory. Moral considerations and concepts like the social contract are what we use when we get to the outer limits of what we know to be true or what we know how to deal with based on precedent. We imagine our way into a decision that fits with the collective framework of the society in which we live. Those frameworks may be shaped by religious communities or by physical communities. When people don’t have a framework or a sense of commitment to others, however, they tend to make decisions that seem aberrant. In the case of self-driving cars, there’s no way to make sure that the decisions made by individual technologists in corporate office buildings will align with the actual collective good. This leads us to ask, again: Who does this technology serve? How does it serve us to use it? If self-driving cars are programmed to save the driver over a group of kindergarteners, why? What does it mean to accept that programming default and get behind the wheel?

Plenty of people, including technologists, are sounding warnings about self-driving cars and how they attempt to tackle very hard problems that haven’t yet been solved. Internet pioneer Jaron Lanier warned of the economic consequences in an interview:

The way self-driving cars work is big data. It’s not some brilliant artificial brain that knows how to drive a car. It’s that the streets are digitized in great detail. So where does the data come from? To a degree, from automated cameras. But no matter where it comes from, at the bottom of the chain there will be someone operating it. It’s not really automated. Whoever that is—maybe somebody wearing Google Glass on their head that sees a new pothole, or somebody on their bike that sees it—only a few people will pick up that data. At that point, when the data becomes rarified, the value should go up. The updating of the input that is needed is more valuable, per bit, than we imagine it would be today.(4)

Lanier is describing a world in which vehicle safety could depend on monetized data—a dystopia in which the best data goes to the people who can afford to pay the most for it. He’s warning of a likely future path for self-driving cars that is neither safe nor ethical nor toward the greater good. The problem seems to be that few people are listening. “Self-driving cars are nifty and coming soon” seems to be the accepted wisdom, and nobody seems to care that the technologists have been saying “coming soon” for decades now. To date, all self-driving car “experiments” have required a driver and an engineer to be onboard at all times. Only a technochauvinist would call this success and not failure.

A few useful consumer advances have come out of self-driving car projects. My car has cameras embedded in all four sides; the live video from these cameras makes it easier to park. Some luxury cars now have a parallel-parking feature to help the driver get into a tight space. Some cars have a lane-monitoring feature that sounds an alert when the driver strays too close to the lane markings. I know some anxious drivers who really value this feature.

Safety features rarely sell cars, however. New features, like onboard DVD players and in-car Wi-Fi and integrated Bluetooth, are far more helpful in increasing automakers’ profits. This is not necessarily toward the greater good. Safety statistics show that more technology inside cars is not necessarily better for driving. The National Safety Council, a watchdog group, reports that 53 percent of drivers believe that if manufacturers put infotainment dashboards and hands-free technology inside cars, these features must be safe to use. In reality, the opposite is true. The more infotainment technology goes into cars, the more accidents there are. Distracted driving is up since people started texting on mobile phones while driving. More than three thousand people per year die on US roads in distracted driving accidents. The National Safety Council estimates that it takes an average of twenty-seven seconds for the driver’s full mental attention to return after checking a phone. Texting while driving is banned in forty-six states, the District of Columbia, Puerto Rico, Guam, and the US Virgin Islands. Nevertheless, drivers persist in using phones to talk or text or find directions while driving. Young people are particularly at fault. Between 2006 and 2015, the percentage of drivers aged sixteen to twenty-four who were visibly manipulating handheld devices rose from 0.5 percent to 4.9 percent, according to the NHTSA.(5)

Building self-driving cars to solve safety problems is like deploying nano-bots to kill bugs on houseplants. We should really focus on making human-assistance systems instead of on making human-replacement systems. The point is not to make a world run by machines; people are the point. We need human-centered design. One example of human-centered design might be for car manufacturers to put into their standard onboard package a device that blocks the driver’s cell phone. This technology already exists. It’s customizable so that the driver can call 911 if need be but otherwise can’t call or text or go online. This would cut down on distracted driving significantly. However, it would not lead to an economic payday. The hope of a big payout is behind a great deal of the hype around self-driving cars. Few investors are willing to give up this hope.
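
As a purely illustrative sketch of the kind of rule such a blocking device would enforce (hypothetical Python; the excerpt names no specific product or interface, and the single-number emergency allowlist is my assumption):

    # Hypothetical sketch of an emergency-only allowlist while the car is moving.
    EMERGENCY_NUMBERS = {"911"}  # assumption: US emergency number; a real device would localize this

    def call_permitted(number: str, vehicle_in_motion: bool) -> bool:
        """Allow emergency calls at any time; block everything else while driving."""
        if not vehicle_in_motion:
            return True
        return number in EMERGENCY_NUMBERS

    assert call_permitted("911", vehicle_in_motion=True)
    assert not call_permitted("212-555-0100", vehicle_in_motion=True)

The engineering here is trivial; what the passage argues is that the incentive to ship it is missing.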

The economics of self-driving cars may come down to public perception. In a 2016 conversation between President Barack Obama and MIT Media Lab director Joi Ito, which was published in Wired, the two men talked about the future of autonomous vehicles.(6) “The technology is essentially here,” Obama said.

We have machines that can make a bunch of quick decisions that could drastically reduce traffic fatalities, drastically improve the efficiency of our transportation grid, and help solve things like carbon emissions that are causing the warming of the planet. But Joi made a very elegant point, which is, what are the values that we’re going to embed in the cars? There are gonna be a bunch of choices that you have to make, the classic problem being: If the car is driving, you can swerve to avoid hitting a pedestrian, but then you might hit a wall and kill yourself. It’s a moral decision, and who’s setting up those rules?

Ito replied: “When we did the car trolley problem, we found that most people liked the idea that the driver and the passengers could be sacrificed to save many people. They also said they would never buy a self-driving car.” It should surprise no one that members of the public are both more ethical and more intelligent than the machines we are being encouraged to entrust our lives to. 

Excerpted and adapted from Artificial Unintelligence: How Computers Misunderstand the World by Meredith Broussard (The MIT Press, 2018).

  1. Danah Boyd and Kate Crawford. “Critical Questions for Big Data: Provocations for a Cultural, Technological, and Scholarly Phenomenon.” Information Communication and Society 15, no. 5 (June 2012): 662–679. doi:10.1080/1369118X.2012.678878.
  2. Danah Boyd, Emily F. Keller, and Bonnie Tijerina. “Supporting Ethical Data Research: An Exploratory Study of Emerging Issues in Big Data and Technical Research.” Data & Society Research Institute, August 4, 2016.
  3. Stewart Brand. The Media Lab: Inventing the Future at MIT. New York: Viking, 1987.
  4. www.wholeearth.com/issue/1340/article/189/we.are.as.gods
  5. www.content.time.com/time/magazine/article/0,9171,982602,00.html
  6. Zachary W. Brewster and Michael Lynn. “Black-White Earnings Gap among Restaurant Servers: A Replication, Extension, and Exploration of Consumer Racial Discrimination in Tipping.” Sociological Inquiry 84, no. 4 (November 2014): 545–569.
@merbroussard

Meredith Broussard is a data journalist and assistant professor at the Arthur L. Carter Journalism Institute of New York University. Her academic research focuses on artificial intelligence in investigative reporting, with a particular interest in using data analysis for social good. Her work has been supported by the Institute of Museum & Library Services as well as the Tow Center at Columbia Journalism School. Her features and essays have appeared in The Atlantic, Slate, and other outlets.
