Moral Machines

Autonomous cars and their choices.

As machines take on more tasks formerly done by humans, they may soon have to make choices that are moral in nature. Most of us appreciate the safety features added to newly designed cars to make our driving safer and easier, such as rear-view cameras and adaptive cruise control. It has been predicted that sometime in the future cars will drive for us: we will get in our Ford and tell it where we want to go. The debate is not about whether this will happen but when – in the next decade, or 20 or more years from now?

When cars become responsible for all or a large part of our driving, the vehicle designers (and possibly operators) will have to decide what the car will do in emergency situations. At some point the self-driving car will be faced with a choice: hit a pedestrian, or crash into a wall and injure the passengers? It may not happen often, and any real situation may be cloaked in shades of gray (how badly will the pedestrian or passengers be hurt?), but the machine will have to make what we consider to be a moral decision. What instructions must we program into the machine to guide it in making this decision?

This past November's issue of the journal Nature described a very large experiment designed to explore what moral choices people want self-driving cars to make. The experiment used an online app called the Moral Machine. Participants signed up and were presented with forced choices in simulated moral situations, each with life-and-death consequences.

For instance, if a man was jaywalking, should the car with two elderly passengers in it crash, killing the passengers to save the jaywalker's life? Forty million decisions by millions of people in 233 countries and territories were collected. The variables explored were the number of people killed; the age of the victims; whether they were law-abiding; their gender; their social status; their fitness level; whether pets should be spared; passengers versus pedestrians; and whether the car should act at all. Participants could also provide general demographic information, and their GPS location was recorded so that differences across regions and cultures could be assessed.

Three overarching conclusions emerged: people wanted to save people rather than pets; the larger number of people should be saved; and the young should be saved in preference to older individuals. Beyond these, other factors were not nearly as strong; lawfulness and status were next. There were also cultural clusters; for example, the victims' age appears less important in the eastern cluster of cultures and much more important in the southern cluster.

Despite its size, the study is limited in that only people with computer access could participate, and all decisions required a black-and-white yes/no response. The authors go on to argue that any programming for autonomous cars will need to be built around people's sense of which moral decisions are the right ones rather than around some absolute ethical principles. For example, one of the ethical codes suggested in Germany argues that an individual's characteristics should not be considered. This is inconsistent with the widespread perception that the age of the potential victim matters – children and babies should be spared first.

In Scripture we are told that love is the fulfillment of the law: love for God and love for our neighbours. The moral decisions that autonomous cars may need to make are not prescribed directly in Scripture for us to follow. But the general rules suggested by this research are not inconsistent with what God asks us to do. Decisions on how cars are to be guided in emergencies need to be well defined and understood before these smart vehicles occupy our roads.
