Saturday, March 9, 2019

The Trolley Problem and Self-Driving Cars

Lives may certainly be saved, but property damage, animal life, and utility may suffer declines in other areas.


First posed by the philosopher Philippa Foot in 1967, the Trolley Problem is one of the most widely recognized questions in ethics and moral philosophy. The problem asks what a bystander should do upon seeing a runaway trolley hurtling down a set of tracks toward five people who are tied up. There is a lever the bystander can pull to divert the trolley onto an alternate track, but the trolley would then hit one person who is tied up on that track. Effectively, the bystander must immediately decide whether he would rather passively watch five people die or actively intervene and cause the death of one person to save five others.

In its original form, the Trolley Problem raises two main questions: (1) how the bystander would react on an impulsive emotional level and (2) how he should react on a morally rationalized level. As an intellectual exercise, instant impulse is ignored, and the focus is on rationalizing and justifying the answer within the strict two-track hypothetical.

Possible Moral Frameworks

The problem is set up for utilitarianism, that is, saving the greatest number possible. But there are other moral frameworks to consider. It boils down to competing ethical concepts of action and consequence. On the one hand, the person pulling the lever must make a decision about the ethics of being involved: choose to intervene and kill one, or sit back, watch five die, and bear the moral weight of his own decision. The alternative is purely consequentialist: pulling the lever saves more people and is almost certainly the right answer. The two tracks and one lever severely limit the available options and outcomes, all of which are tragedies. But in the real world, vast complexity creates the potential for different outcomes.

Though similar ethical dilemmas may be interesting thought experiments, the most relevant form of the Trolley Problem today is the self-driving car. Some view this as a bad approach to the topic, but with proper framing, there are still lessons to draw.

Unlike the snap decision required of the bystander in a real-life trolley problem, the team of designers, programmers, ethicists, and engineers behind a self-driving car has the luxury of deciding what calculations are optimal in each potential situation long before the car is manufactured. In this way, the programming team is like the bystander in the Trolley Problem, as they are effectively predetermining when to “pull the lever” by coding it into the vehicle’s algorithms.

For these individuals, the moral dilemma must take center stage. Whether disguised as probabilities or coded as dollar-value outcomes, the underlying question is the same: who should the vehicle save when faced with a sudden dynamic situation?

The Car as the Trolley

Once the car is on the road in self-driving mode, the lever pulls are predetermined by a programmer who functions as a bystander looking into hypothetical futures, while the driverless car becomes the trolley. There is still a human effectively driving the car, but that human long ago finished coding all the responses into the vehicle’s brain. Furthermore, while the original Trolley Problem has only two track-bound outcomes, autonomous vehicles open up a nearly infinite set of outcomes in the dynamic setting of an open road with unpredictable factors.

There are no tracks, no one is tied down, there are thousands of other “trolleys” on the road, and the potential victims include the passengers of the car. Fortunately, this increased range of outcomes includes many in which no one dies. In these situations, the design team must consider how best to weigh non-lethal outcomes. Additionally, the exponential complexity is accompanied by time to think, reflect, program, and experiment before the vehicles set out.

A complication for the autonomous vehicle is that the programmer, as the lever puller, both does and does not pull the lever, because when the vehicle confronts would-be victims there are no natural track-bound outcomes and lever-pull alternatives. Rather, every outcome is programmed as a lever, including slamming the brakes and hitting no one at all.


Rather than a car acting on its own, it is the programmer who might design the car to destroy itself if necessary to avoid a child in the road, saving the child and, with the right safety features, the passengers, but totaling the vehicle. Other sacrifices could come in utility: a speed governor could limit potential fatalities and damage but would reduce the performance and thrill of greater speed. Perhaps animals will always be assigned low value compared to humans, meaning that dogs in the road will routinely be hit in order to prevent accidents caused by swerving to avoid them.
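To make the idea of preprogrammed "lever pulls" concrete, here is a minimal sketch of how a design team might encode such trade-offs as a cost comparison over candidate maneuvers. Everything here is invented for illustration: the harm categories, the weights, and the function names are hypothetical, not drawn from any real autonomous-vehicle system.

```python
# Hypothetical harm weights a design team might settle on ahead of time.
# The relative magnitudes encode the value judgments discussed above.
HARM_WEIGHTS = {
    "human_fatality": 1_000_000,
    "human_injury": 10_000,
    "animal_fatality": 100,
    "vehicle_damage": 1,
}

def maneuver_cost(predicted_outcomes):
    """Total weighted harm for one maneuver's predicted outcomes."""
    return sum(HARM_WEIGHTS[kind] * count
               for kind, count in predicted_outcomes.items())

def choose_maneuver(options):
    """Pick the maneuver whose predicted outcomes carry the least total harm."""
    return min(options, key=lambda name: maneuver_cost(options[name]))

# A toy scenario: a child steps into the road.
options = {
    "brake_hard": {},                      # stops in time, no harm
    "swerve": {"vehicle_damage": 1},       # totals the car, saves the child
    "continue": {"human_fatality": 1},     # hits the child
}

print(choose_maneuver(options))  # -> brake_hard
```

The point of the sketch is not the arithmetic but where the moral content lives: every number in `HARM_WEIGHTS` is a value judgment made by humans long before the car encounters the scenario.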

There will be many effects, regardless of how the vehicles are programmed. Lives may certainly be saved, but property damage, animal life, and utility may suffer declines in other areas. These are all traceable back to the programmer, who pulls virtual levers by thinking about what may happen and equipping the algorithms to react to dynamic settings.

Does the Nature of Death Change?

In the case of the self-driving car, the programmer must weigh different scenarios and determine that certain deaths are justifiable, even possibly programming the vehicle to actively cause one terrible outcome in order to minimize the total loss. Of course, as noted, not every scenario requires a tragic outcome. Deciding where the line is and programming the value of life into a computer is a monumental task. Surely, there is no right answer, but there are still ethical questions to consider.

While no death resulting from an autonomous vehicle would be a murder, the nature of the death changes. Whereas a reflex-reaction accident is entirely void of premeditation and forethought, a programmer is in some sense thinking ahead and deciding on values and algorithmic responses. Even if autonomous vehicles reduce deaths on the road, the deaths that remain are in effect signed off on ahead of time rather than occurring as in-the-moment accidents.

Each time a prototype self-driving vehicle causes death, injury, or destruction, the event is covered extensively in the media. But these events are vanishingly rare compared to the number of tragedies wrought by human drivers. Would we favor the comparative safety of self-driving cars if it meant fewer deaths overall? Are we uncomfortable with the idea of the “lever” being “pulled” by a programmer creating an algorithm, even when far more people are likely to die in road accidents caused by imperfect human decisions made in the blink of an eye?

A move toward autonomous vehicles means we must determine some standard of value for human life and program it into our vehicles in anticipation of future tragedies. These vehicles may guarantee a far lower rate of tragedies than our current human-piloted vehicles, but when drivers give up control, a slight moral shift occurs because the resulting deaths—though fewer—are no longer the result of human error and accident but the result of predetermined algorithms.

This is, of course, a new Trolley Problem, one burst open with complexity and more alternate outcomes. But there are still ethical questions hanging around. Self-driving cars are still controlled in some sense by humans, but they are removed several steps in the causal chain by preprogramming “levers” ahead of time.

Our current roadways demonstrate the instantaneous-reaction model of the Trolley Problem, where drivers really do react as the scene unfolds. The shift to autonomous vehicles will be more akin to the academic use of the Trolley Problem: taking time to rationalize and optimize the response. Before our roads are full of autonomous vehicles, we must first accept that, as a society, we would prefer fewer road deaths, even if it means the deaths that remain are in some way attributable to humans pulling levers behind the scenes.

  • Benjamin thinks, writes, and talks about economics, law, and public policy. His articles are intended to present issues in a new light to readers and do not necessarily reflect personal opinion. No articles represent the views of past or present employers.