Long before a car came anywhere close to driving itself, ethicists were debating variations of the thought experiment known as “the trolley problem.” A streetcar is barreling toward three people tied to the tracks. Do nothing and they all die. Pull a lever and the streetcar switches tracks — killing somebody else. Do you pull the lever? What if the other person is elderly? A schoolchild?

The dilemma and its variants are often cited as the kind of problem artificial intelligence will have to grapple with as it becomes ever more prevalent in our lives. Driverless cars, for example, will have split seconds to make exactly those kinds of decisions.

In this special report on the future of artificial intelligence, we explore the technology’s implications. Are people ready to trust their lives to driverless cars? What about an AI doctor? Who’s to blame when price-setting algorithms collude?

We also spoke to Armin Grunwald, an adviser to the German parliament who is tasked with mapping out the ethical implications of artificial intelligence. Grunwald, it turns out, has an answer to the trolley problem.

This article is part of the special report Confronting the Future of AI.
