Tags
David Chalmers, Economist, Immanuel Kant, nonhuman animals, obligation, technology, trolley problem, utilitarianism
Last week the Economist ran a cover story on a philosophical topic: the ethics of robots. Not the usual question about the ethics of developing robots in a given situation, but the ethics of the robots themselves. The Economist is nothing if not pragmatic, and would not ask such a question if it weren’t one of immediate importance. As it turns out, we are increasingly programming machines to make decisions for us — military robots and Google’s driverless cars, for example. And those machines will need to make decisions of the sort we have usually viewed as moral or ethical:
Should a drone fire on a house where a target is known to be hiding, which may also be sheltering civilians? Should a driverless car swerve to avoid pedestrians if that means hitting other vehicles or endangering its occupants? Should a robot involved in disaster recovery tell people the truth about what is happening if that risks causing a panic? (Economist, 2 June 2012)