For driverless cars, a moral dilemma: Who lives or dies?

In this Tuesday, Jan. 10, 2017, photo, an autonomous vehicle is driven by an engineer on a street through an industrial park, in Boston. Researchers at Massachusetts Institute of Technology are asking human drivers how they'd handle life-or-death decisions in hopes of creating better algorithms to guide autonomous vehicles. (AP Photo/Steven Senne)

What if the brakes go out in a driverless car? Does it mow down a crowd of pedestrians or swerve into a concrete wall and sacrifice its passenger?

Researchers at the Massachusetts Institute of Technology are asking humans around the world how they think a robot car should handle life-or-death decisions.

They're finding that many people want self-driving cars to act in the greater good, preserving as much life as possible. But few would buy a car programmed to sacrifice its own passengers to do so.

The researchers' goal is not just to inspire better algorithms and ethical tenets to guide autonomous vehicles, but to understand what it will take for society to accept these vehicles and use them.
