How does Google’s self-driving car deal with the trolley problem in philosophy?
The trolley problem is a thought experiment in ethics. The general form of the problem is this: There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side track. You have two options:
• Do nothing, and the trolley kills the five people on the main track.
• Pull the lever, diverting the trolley onto the side track where it will kill one person.
Which is the most ethical choice?
Source: Trolley problem — Wikipedia
The trolley problem as it relates to driverless cars is a hot topic of debate. In a situation where the car has to choose between a pedestrian and the passenger (much like the trolley problem), whom should the programmers program the car to save? How do the developers, designers, and programmers deal with this moral dilemma?
The classic trolley problem is pretty straightforward: it has been the subject of many surveys in which approximately 90% of respondents chose to kill the one and save the five. I find a different version, the fat man problem, more interesting and more relatable to driverless cars.
As before, a trolley is hurtling down a track towards five people. You are on a bridge under which it will pass, and you can stop it by putting something very heavy in front of it. As it happens, there is a very fat man next to you — your only way to stop the trolley is to push him over the bridge and onto the track, killing him to save five. Should you proceed?
Source: Trolley problem — Wikipedia
There are a few problems with applying this problem to driverless cars:
Picture: Moral Machine
- The trolley problem deals with two fixed outcomes. In the case of driverless cars, there is a third outcome as well: slam on the brakes; or even a fourth: divert the vehicle onto an alternate route away from the road. To relate the trolley problem to the decision making of driverless cars, we need very strict conditions.
Chris Urmson, who heads up Google’s self-driving car project, said in an interview with The Washington Post:
“It takes some of the intellectual intrigue out of the problem, but the answer is almost always ‘slam on the brakes’,” he added. “You’re much more confident about things directly in front of you, just because of how the system works, but also your control is much more precise by slamming on the brakes than trying to swerve into anything. So it would need to be a pretty extreme situation before that becomes anything other than the correct answer.”
- Google’s cars have driven over 1 million miles in autonomous mode, and reportedly there has been no serious accident so far. So there is actually a very slim chance of a car ever having to make that moral choice.
Urmson added that the system is engineered to work hardest to avoid vulnerable road users (think pedestrians and cyclists), then other vehicles on the road, and lastly things that don’t move.
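Taken together, Urmson’s two remarks suggest a simple way to picture the behaviour: brake in-lane by default, and weight potential collisions by how vulnerable the road user is. The sketch below is only my illustration of that idea; the weights, probabilities, and function names are invented and have nothing to do with Google’s actual software.

```python
# Illustrative sketch only -- not Google's code. It encodes two ideas from the
# interviews: prefer braking in-lane, and work hardest to avoid vulnerable road users.

# Hypothetical vulnerability weights: higher = try harder to avoid.
WEIGHTS = {"pedestrian": 3.0, "cyclist": 3.0, "vehicle": 2.0, "static_object": 1.0}

def maneuver_cost(maneuver, obstacles, swerve_penalty=0.5):
    """Score a candidate maneuver by the weighted collision risk it carries.

    `obstacles` is a list of (kind, collision_probability) pairs predicted for
    this maneuver; `swerve_penalty` reflects that braking is a more precise,
    better-understood action than swerving.
    """
    risk = sum(WEIGHTS[kind] * p for kind, p in obstacles)
    return risk + (0.0 if maneuver == "brake" else swerve_penalty)

def choose_maneuver(predictions):
    """Pick the candidate maneuver with the lowest weighted risk."""
    return min(predictions, key=lambda m: maneuver_cost(m, predictions[m]))

# Toy scenario: braking risks a low-speed hit on a static object,
# swerving right risks clipping a cyclist.
predictions = {
    "brake": [("static_object", 0.30)],
    "swerve_right": [("cyclist", 0.10)],
}
print(choose_maneuver(predictions))  # -> brake
```

Even in this toy version, notice that the “moral” content lives entirely in the hand-picked weights, which is precisely the sticking point discussed further below.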
- The actual trolley problem is an idealised scenario where the probability of every outcome is either 1 or 0. If you don’t act, there is a 100% chance of the trolley killing the five people; if you push the fat man onto the track, there is a 0% chance of the trolley killing the five. Real life doesn’t work that way. If a driverless car has to choose between the passenger and a pedestrian, it has to weigh a multitude of uncertain factors. What is the probability that the passenger survives if the car avoids the pedestrian? What is the probability that the pedestrian survives if the car doesn’t avoid her/him? Whom should the car choose to kill: an old man or a child, two old women or a fit young doctor? The possible scenarios are, I think, too complex for a machine to resolve (a rough sketch of this kind of expected-harm reasoning follows the quotes below). Humans, under the same circumstances, tend to fare even worse than machines. And if a problem has no solution, there is not much use pondering over it.
Urmson stressed that Google’s cars can’t know which person might be walking on a sidewalk or ambling in a crosswalk. The car won’t be able to decide which pedestrian makes the most sense to strike in the event of an unavoidable collision.
“It’s not possible to make a moral judgement of the worth of one individual person versus another — convict versus nun,” he said. “When we think about the problem, we try to cast it in a frame that we can actually do something with.”
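To make the “probabilities are never 1 or 0” point concrete, here is a rough, purely hypothetical expected-harm calculation. The numbers and the scoring scheme are mine, invented for illustration; nothing here reflects how Google’s system is actually built.

```python
# Invented numbers for illustration -- real perception systems produce noisy,
# constantly changing probability estimates, not certainties.

def expected_harm(outcomes):
    """Expected number of serious injuries for one candidate action.

    `outcomes` is a list of (probability_of_serious_injury, people_affected).
    """
    return sum(p * n for p, n in outcomes)

actions = {
    # Braking hard: small chance the pedestrian is still hit, small chance
    # the passenger is hurt in a rear-end collision.
    "brake": [(0.10, 1), (0.05, 1)],
    # Swerving: the pedestrian is spared, but there is a higher chance the
    # passenger is seriously injured, plus some chance of hitting someone unseen.
    "swerve": [(0.25, 1), (0.10, 1)],
}

for action, outcomes in actions.items():
    print(f"{action}: {expected_harm(outcomes):.2f} expected serious injuries")
# brake: 0.15 expected serious injuries
# swerve: 0.35 expected serious injuries
```

The arithmetic is trivial; the hard part is everything these numbers hide: how the probabilities are estimated in real time, and whether minimising “expected injuries” is even the morally right objective, which is exactly the lack-of-consensus problem raised next.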
Andrew Chatham, a principal engineer on the project at Google, said in an interview with The Guardian:
“The main thing to keep in mind is that we have yet to encounter one of these problems,” he said. “In all of our journeys, we have never been in a situation where you have to pick between the baby stroller or the grandmother. Even if we did see a scenario like that, usually that would mean you made a mistake a couple of seconds earlier. And so as a moral software engineer coming into work in the office, if I want to save lives, my goal is to prevent us from getting in that situation, because that implies that we screwed up.”
- Even if a car is in a situation where it has to make such a decision, and even if it has the ability to do so and actually makes it, is that decision morally right? The trolley problem (and its variants) is still debated. How can we program a machine to make a moral decision when there is no moral consensus among humans?
Even if a self-driving car did come up against a never-before-seen situation where it did have to pick between two accidental death scenarios, and even if the brakes failed, and even if it could think fast enough for the moral option to be a factor, there remains no real agreement over what it should do even in idealised circumstances. — Andrew Chatham
The trolley problem as it relates to driverless cars is an interesting thing to think and talk about, but I don’t think it has any significant practical application. There is an interesting MIT project called Moral Machine where you can browse, judge, and even design trolley-problem scenarios and discuss them.
Full interviews:
• Self-driving cars don’t care about your moral dilemmas (The Guardian)
• Google’s chief of self-driving cars downplays ‘the trolley problem’ (The Washington Post)