Imagine an autonomous vehicle driving down a two-lane highway. Suddenly, its brakes fail. If it keeps moving forward, it will hit two men crossing the road. If it swerves out of the lane, it will hit a few dogs. Whom should the vehicle choose to save?
Imagine another self-driving vehicle, this one carrying a man, a woman, a child and a dog. Ahead, a pregnant woman, an elderly woman, a robber, a girl and a poor person are crossing the road. Brakes having failed, if the vehicle chooses to swerve, it will crash into a barricade, which will likely kill its passengers. What choice should it make?
These two are among 13 scenarios presented as part of The Moral Machine Experiment, an online “game” that engaged 2.3 million people with such difficult binary choices, an adaptation of the well-known “trolley problem” that has long been discussed among ethicists and psychologists.
The study, published in Nature, threw up a mix of expected and unexpected results. On average, respondents across countries chose human life over animals, preferred to spare the larger number of lives, and preferred to spare the young. On other aspects, there was stark disagreement among respondents from various countries. This suggested that a “universal moral code” would be difficult to establish.
“The Moral Machine is designed to explore the moral dilemmas faced by autonomous vehicles,” wrote the authors, an international team from Massachusetts Institute of Technology, Harvard University, University of British Columbia, and Toulouse School of Economics.
“One of the advantages of self-driving cars is that they will be able to react much faster than us and without the biases and instincts that might prevent us from doing what we believe to be right, were we driving,” Dr Azim Shariff, associate professor, University of British Columbia and an author of the paper, wrote to The Indian Express. “They [people] have the luxury of deliberation, and thus the responsibility of deliberation. With the self-driving cars we can program them to operate in a more ethical way than we would expect human beings — with all their psychological limitations — to do.”
The game’s 13 scenarios require users to choose between action and inaction, between saving more and fewer, and between passengers and pedestrians. “The scenarios we used in the game are idealisations, or abstractions, designed to make the ethical trade-offs easily understandable. To better understand people’s biases, we included dimensions where we predicted these biases might emerge: age, gender, weight, and social status. Finally, we included pets mainly as a frivolous way of making the game more fun. There was a bit of a trade-off here where we wanted the game to provide us with serious and useful data, but to get people to do the task in large numbers, we needed to make the game appealing,” Shariff said.
Indian participants appeared largely inclined towards saving the elderly and women, rather than pedestrians who follow rules or those of higher status. While most users from the subcontinent showed similar ethical inclinations, Bangladeshi users’ choices were more similar to those of western countries. Also, Indians’ choices were most similar to those of users from Sweden; for users from Pakistan, the closest match was with Indian participants. “Different cultural factors appear to affect people’s ethical commitments, so as the factors change we could see the ethics change,” Shariff said.
Until now, only Germany has drawn up guidelines for the ethical choices to be made by self-driving vehicles. While these do put human lives ahead of animal lives, they are silent on most other situations, such as whether to spare younger lives or more lives.
Even though a driverless car may seem some distance away in India, the authors hope that their experiment kick-starts a conversation. “The technology is advancing rapidly and there are fully autonomous cars on the streets of the United States right now (albeit still in a trial capacity)… So, yes, it’s likely these cars will be in California and China before they are in India,” said Shariff.
The study has drawn a mixed response. “Within academia, there has been excitement among psychologists about the breadth of the results; by reaching such a large number of people in so many countries, the study provides valuable information about the cultural differences and universality of moral judgments. That said, some philosophers and other scholars get irritated by the use of analogues to the classic trolley problem,” said Shariff.
“To be frank, I think they are mistaken here, and are taking the scenarios we have used literally but not seriously, whereas they should be taken seriously but not literally.”