
Last week the Economist ran a cover story on a philosophical topic: the ethics of robots. Not just the usual question about the ethics of developing robots in a given situation, but the ethics of the robots themselves. The Economist is nothing if not pragmatic, and would not ask such a question if it weren’t one of immediate importance. As it turns out, we are increasingly programming machines, such as military robots and Google’s driverless cars, to make decisions for us. And those machines will need to make decisions of the sort we have usually viewed as moral or ethical:

Should a drone fire on a house where a target is known to be hiding, which may also be sheltering civilians? Should a driverless car swerve to avoid pedestrians if that means hitting other vehicles or endangering its occupants? Should a robot involved in disaster recovery tell people the truth about what is happening if that risks causing a panic? (Economist, 2 June 2012)

These sorts of questions lend appeal to studying the ethics of the trolley problem, beyond the pedagogical uses I recommended before. Trolley-style thought-experiment ethics works to find a middle ground between utilitarianism and Kantianism, and much of the appeal of both of those approaches to ethics is their relatively simple decision procedure. Each approach allows ethical decision-making to be reduced to a single, simple question: of the actions available to me, which one will bring about the greatest happiness for the greatest number of people? Or: on what maxim can I act that I could also will to be a universal law? In practice, very few human beings make their decisions according to such a simple principle. But for computers it is a different story: everything in computing is about algorithms, about taking on every complex procedure by reducing it to a structured set of simpler ones, which can themselves be reduced in turn, and so on recursively until you’re just left with ones and zeros. It’s much easier to make a robot act according to Kant than Aristotle.
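To see how naturally utilitarianism lends itself to an algorithm, here is a toy sketch of my own (the function, the scenario, and the numbers are all invented for illustration, not drawn from the article): once you grant the theory’s assumption that happiness can be scored numerically, the whole decision procedure collapses into picking a maximum.

```python
def utilitarian_choice(actions):
    """Pick the action whose summed happiness effects, across everyone
    affected, are greatest -- the utilitarian question as a one-liner."""
    return max(actions, key=lambda action: sum(action["happiness_effects"]))

# Hypothetical numbers for a driverless-car dilemma: swerving harms the
# two occupants a little but spares the pedestrian; continuing spares
# the occupants but harms the pedestrian gravely.
actions = [
    {"name": "swerve",   "happiness_effects": [-10, -10, +50]},
    {"name": "continue", "happiness_effects": [0, 0, -100]},
]

print(utilitarian_choice(actions)["name"])  # prints "swerve" (+30 vs -100)
```

The hard part, of course, is nowhere in the code: it is in assigning those numbers, which is exactly where real life resists the theory.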

That’s not to say that this is easy – even the analytic philosophers who work on the trolley problem have never really been able to agree on an answer to it. On the other hand, they do tend to agree that the answer should be found somewhere in between unbending Kant and flexible utilitarianism. The actual decisions the robots made under the competing approaches might differ only in certain extreme limit cases; one might be able to reach a compromise that the robots were to decide their actions on the basis of something like Thomas Aquinas’s doctrine of double effect, as long as credit was not given to its originator.
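Part of what would make double effect attractive for a machine is that, unlike Aristotelian judgment, it decomposes into discrete testable conditions. A simplified sketch of my own (the field names and scoring are invented; the real doctrine is stated qualitatively, not numerically):

```python
def double_effect_permits(act):
    """A simplified double-effect test as three boolean checks:
    the act itself must be permissible, the harm must be a foreseen
    side effect rather than the means to the good, and the good
    must outweigh the harm (proportionality)."""
    return (
        not act["intrinsically_wrong"]
        and not act["harm_is_the_means"]
        and act["good"] > act["harm"]
    )

# A hypothetical drone strike where civilian harm is foreseen
# but is not the means by which the military end is achieved.
strike = {"intrinsically_wrong": False, "harm_is_the_means": False,
          "good": 8, "harm": 3}
print(double_effect_permits(strike))  # prints True under these made-up numbers
```

The controversial philosophical work is hidden in deciding what counts as “the means” and how to weigh good against harm; the checks themselves are trivial.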

But the ethics of robots go considerably deeper. What the pragmatic Economist doesn’t address is the long-range, bigger-picture questions. Robots are going to have to make the kinds of decisions that humans think of as moral ones. To what extent does that give them moral status? The Economist says “As they become smarter and more widespread, autonomous machines are bound to end up making life-or-death decisions in unpredictable situations, thus assuming—or at least appearing to assume—moral agency.” To what extent is this mere appearance, and to what extent is it something more?

The first objection to much of this line of thinking is that the robots aren’t really making moral decisions: their programmers just do that for them. But a very similar question can be and has been raised about human beings: we too, after all, are the products of our makeup and our circumstances. The nature of the factors presumed to determine human behaviour has changed across time and place – God, karma, genetics, upbringing – but free will remains elusive, a perennial question. Whether we consider robots moral agents may depend on the details of our answer to that very difficult question.

And this fact – that it’s difficult to determine robots’ status as moral agents – raises an even thornier question, about whether robots of sufficient complexity should be objects of ethical concern. Not just “can robots make moral decisions?” but “is it wrong to kill a robot?”

Here, we might appear to face an odd inversion of some of the questions typically asked about (nonhuman) animals. Animals – we think – are sentient, they have some sort of subjectivity or consciousness, they can experience happiness and suffering, and so they are objects of concern for traditions from utilitarianism to Buddhism that base their concerns on happiness and suffering. But they are not rational beings, and so they get little concern from Kantians who see morality as based on mutual respect between rational beings. Robots of sufficient complexity look like the very opposite of animals: they are rational but have no consciousness.

But even this characterization is more complex than it first seems. Kantian rationality, as I understand it, requires some sort of conception of free will, an ability to take the first-person standpoint – so it too seems to depend on subjectivity. But eventually even the idea that machines lack subjectivity will become problematic. How do we know that they lack it?

We don’t take solipsism seriously – we assume that those around us are conscious subjects like ourselves – because it seems so obvious that they act just like we do in the respects relevant to consciousness. But we regularly build computers to act in ever more humanlike ways. I doubt it will be more than a couple of decades before their behaviour starts passing Turing tests – becoming indistinguishable to us from that of another human. It’s feasible now to imagine us being able to build entities that outwardly resemble David Chalmers’s philosophical zombies – beings just like us but without the inner experience of consciousness or subjectivity. And some are ready to deny the existence of subjectivity at all. This includes behaviourists, and on some interpretations it even includes Confucians. On that account, if the robots act just like us and fit into society just like us, then they are just like us. And we have as many obligations to them as we do to each other.

I don’t buy the position that philosophical zombies would be just like us, and one could take this argument as a reductio ad absurdum against it. But it doesn’t have to be taken that way. The denier of consciousness could just as well bite the bullet and say that we do indeed have obligations toward our mechanical creations.