Tags
David Chalmers, Economist, Immanuel Kant, nonhuman animals, obligation, technology, trolley problem, utilitarianism
Last week the Economist ran a cover story on a philosophical topic: the ethics of robots. Not just the usual question one might ask about the ethics of developing robots in a given situation, but the ethics of the robots themselves. The Economist is nothing if not pragmatic, and would not ask such a question if it weren’t one of immediate importance. As it turns out, we are increasingly programming machines – military robots, Google’s driverless cars – to make decisions for us. And those machines will need to make decisions of the sort we have usually viewed as moral or ethical:
Should a drone fire on a house where a target is known to be hiding, which may also be sheltering civilians? Should a driverless car swerve to avoid pedestrians if that means hitting other vehicles or endangering its occupants? Should a robot involved in disaster recovery tell people the truth about what is happening if that risks causing a panic? (Economist, 2 June 2012)
These sorts of questions lend appeal to studying the ethics of the trolley problem, beyond the pedagogical uses I recommended before. Trolley-style thought-experiment ethics works to find a middle ground between utilitarianism and Kantianism, and much of the appeal of both of those approaches to ethics lies in their relatively simple decision procedures. Each approach allows ethical decision-making to be reduced to one single, simple question: of the actions available to me, which one will bring about the greatest happiness for the greatest number of people? Or: on what maxim can I act that I could also will to be a universal law? In practice, very few human beings make their decisions according to such a simple principle. But for computers it is a different story: everything in computing is about algorithms, about taking on every complex procedure by reducing it to a structured set of simpler ones, which can themselves be reduced in turn, and so on recursively until you’re just left with ones and zeros. It’s much easier to make a robot act according to Kant than according to Aristotle.
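To make that vivid, here is a minimal sketch in Python of how the utilitarian question becomes an algorithm – my own illustration, with invented actions and numbers, not anything the Economist or any robot-maker actually proposes. The machine scores each available action by its expected net happiness and picks the maximum; the Kantian question is harder to turn into arithmetic, but it too amounts to a single test applied to each candidate maxim.

```python
# A toy utilitarian decision procedure: score each candidate action by the
# net happiness it is expected to produce, then pick the highest-scoring one.
# The actions, probabilities, and happiness numbers are made up purely for
# illustration; they are not anyone's real model of a driving decision.

def expected_net_happiness(outcomes):
    """Sum happiness over possible results, weighted by probability."""
    return sum(prob * happiness for prob, happiness in outcomes)

def choose_action(actions):
    """The whole 'ethical theory': maximize expected net happiness."""
    return max(actions, key=lambda a: expected_net_happiness(a["outcomes"]))

actions = [
    {"name": "swerve",     "outcomes": [(0.9, -10), (0.1, -50)]},
    {"name": "brake_hard", "outcomes": [(0.7, -5), (0.3, -30)]},
    {"name": "do_nothing", "outcomes": [(1.0, -80)]},
]

print(choose_action(actions)["name"])  # prints "brake_hard" on these numbers
```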
That’s not to say that this is easy – even the analytic philosophers who work on the trolley problem have never really been able to agree on an answer to it. On the other hand, they do tend to agree that the answer should be found somewhere in between unbending Kant and flexible utilitarianism. The actual decisions made by the robots might differ only in certain extreme limit cases; one might even be able to reach a compromise on which the robots decided their actions on the basis of something like Thomas Aquinas’s doctrine of double effect, as long as credit was not given to its originator.
But the ethics of robots go considerably deeper. What the pragmatic Economist doesn’t address are the long-range, bigger-picture questions. Robots are going to have to make the kinds of decisions that humans think of as moral ones. To what extent does that give them moral status? The Economist says “As they become smarter and more widespread, autonomous machines are bound to end up making life-or-death decisions in unpredictable situations, thus assuming—or at least appearing to assume—moral agency.” To what extent is this mere appearance, and to what extent is it something more?
The first objection to much of this line of thinking is that the robots aren’t really making moral decisions: their programmers just make those decisions for them. But a very similar question can be and has been raised about human beings: we too, after all, are the products of our makeup and our circumstances. The factors presumed to determine human behaviour have changed across time and place – God, karma, genetics, upbringing – but free will remains elusive, a perennial question. Whether we consider robots moral agents may depend on the details of our answer to that very difficult question.
And this fact – that it’s difficult to determine robots’ status as moral agents – raises an even thornier question, about whether robots of sufficient complexity should be objects of ethical concern. Not just “can robots make moral decisions?” but “is it wrong to kill a robot?”
Here, we might appear to face an odd inversion of some of the questions typically asked about (nonhuman) animals. Animals – we think – are sentient, they have some sort of subjectivity or consciousness, they can experience happiness and suffering, and so they are objects of concern for traditions from utilitarianism to Buddhism that base their concerns on happiness and suffering. But they are not rational beings, and so they get little concern from Kantians who see morality as based on mutual respect between rational beings. Robots of sufficient complexity look like the very opposite of animals: they are rational but have no consciousness.
But even this characterization is more complex than it first seems. Kantian rationality, as I understand it, requires some sort of conception of free will, an ability to take the first-person standpoint – so it too seems to depend on subjectivity. But eventually even the idea that machines lack subjectivity will become problematic. How do we know that they lack it?
We don’t take solipsism seriously – we assume that those around us are conscious subjects like ourselves – because it seems so obvious that they act just as we do in the respects relevant to consciousness. But we regularly build computers to act in ever more humanlike ways. I doubt it will be more than a couple of decades before their behaviour starts passing Turing tests – becoming indistinguishable to us from that of another human. It’s feasible now to imagine that we could build entities that outwardly resemble David Chalmers’s philosophical zombies – beings just like us but without the inner experience of consciousness or subjectivity. And some are ready to deny the existence of subjectivity altogether. This includes behaviourists, and on some interpretations it even includes Confucians. On that account, if the robots act just like us and fit into society just like us, then they are just like us. And we have as many obligations to them as we do to each other.
I don’t buy the position that philosophical zombies would be just like us, and one could take this argument as a reductio ad absurdum against it. But it doesn’t have to be taken that way. The denier of consciousness could just as well “bite the bullet” and say that we do indeed have obligations toward our mechanical creations.
Ethan Mills said:
Very interesting! I’m not sure Kant himself would say that robots are rational beings. Probably he would deny that we should have rational hope that they have souls and real freedom, as we should with human beings. But I’m just cynical enough to think that may be a manifestation of Kant’s Christian upbringing more than anything inherent to his ethical theory, so Kantians could and probably should say that advanced robots would be rational, both in the theoretical and practical senses.
Utilitarians, on the other hand, would ask whether robots can experience pleasure. That’s a much trickier question. If they really are Chalmers-style philosophical zombies, they lack any experience at all, much less pleasurable experiences. So are they not morally considerable? Or is someone like Dennett right that *we* don’t really have experience in the way we think we do (i.e., there are no qualia), so that robots and humans are relevantly similar when it comes to experiencing (or “experiencing”) pleasure?
Maybe it’s just because I’m too much of a science fiction fan, but I’m always inclined to think we should be nice to any advanced robots that we might create, especially once they start to regularly pass Turing tests. At that point, I’d be in favor of granting them personhood, or at least near-personhood. If anything, maybe that will keep them from revolting and killing us! I also think maybe we need a separate category for some animals such as great apes and dolphins, although we shouldn’t be needlessly cruel to other animals since we can be pretty sure they do feel pain and pleasure. Maybe we need a continuum of moral consideration rather than the all-or-nothing idea that Kantianism tends toward.
Ben said:
Excellent thoughts, Ethan. (And Amod, too.) I really want to contribute to this discussion, since it is a topic near and dear to my heart, but I think you both have it pretty well covered. Ultimately, we may never know robots’ philosophical state; it’s (almost by definition) impossible to know another entity’s subjective experience. Thus, it seems tempting to have a continuum – a sort of probabilistic moral state. “You might be an object of ethical concern, so I will take some precautions to treat you morally. As we become increasingly convinced of your value (by whatever criteria), the way we treat you will approach/equal the way we treat other humans.”
Yet, I imagine that if robots do have a subjective experience someday, this will be a difficult line to tread. Rumor has it that sentient, conscious individuals do not like being treated as even 3/5 of a person.
Not that I have a better solution, mind you. I can only cross my fingers for advances in brain, artificial-intelligence, and complex-systems science, such that we get a better sense for what kind of complexity can give rise to actual subjectivity.
clemens said:
thanks for the article! reading this made me think about how much morals, in the sense of some kind of rules, are actually adequate for humans, in contrast to an ethics which tries to conceive of “the good life”. you mentioned above that for robots a kantian approach (which i identify here more broadly with morals, because that way the utilitarian approach is also covered and put into opposition to the aristotelian approach) is easier. as long as we think there’s something like free human agency (whether this freedom is constituted by the will or something else is not as important as the fact of freedom itself), then there’s a clear difference between humans and robots (since robots, being programmed, are automatically not free). following a kind of morality that could potentially be followed by robots would thus mean producing human agents who neither need to be free nor need to act as if they were. to me, this article seemed to be filling a centuries-old gap, one that has existed ever since humans began to be imagined as single-motive-driven entities (pleasure, self-interest and other concepts, or now more pragmatically: programs).
the question of the personhood of robots would then have to be turned around: how far does allowing robots to carry out moral decisions turn humans robot-like (which means, in the end, turning them into non-free agents)? that’s what i was alluding to in the beginning, in asking how much this kind of morality is actually fit for humans.
Ryan said:
Interesting discussion! I’m not well-versed in philosophical ideas, but I do work at a company that makes robots for the military and I think about this stuff.
In the near future, it’s not unlikely that robots could be programmed to make decisions that have serious ethical implications. Suppose visual recognition software on a military drone could detect a group of men running across a field: the officer who configures it could well decide that the fact they were in the field — or running — or in a group — was sufficient justification for the drone to open fire on them. Realistically, US armed forces in 2012 probably wouldn’t configure a drone to shoot without human authorization, but there’s still the risk that the computer’s rules might bias human thinking.
“Look, it’s dudes in the field! Get them before they get to the village and the system changes the rules of engagement.”
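Concretely, the kind of crude rule I’m imagining would look something like this – entirely made up, not anything my company builds – and you can see how little the software actually “knows” when the rule fires:

```python
# A made-up example of a crude, officer-configured rule. Purely hypothetical:
# the point is how thin the evidence is when the rule matches.

def matches_engagement_rule(detection):
    """detection: a dict a hypothetical visual-recognition system might emit."""
    return (detection.get("location") == "open_field"
            and detection.get("moving_fast", False)
            and detection.get("group_size", 0) >= 3)

detection = {"location": "open_field", "moving_fast": True, "group_size": 4}
if matches_engagement_rule(detection):
    # The worry isn't that this fires a weapon; it's that the flag itself
    # nudges the human operator toward a decision.
    print("flagged: meets configured engagement criteria")
```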
As for the self-driving car that has to decide how to dodge a pedestrian, I expect that designers will try to get around that problem in other ways, such as giving the car a fast reaction/braking time, making it go at reasonable speeds, and sharing information with other vehicles and sensors.
I think it’s unlikely that machines will be imbued with sophisticated ethical frameworks any time soon. Artificial intelligence is nowhere near good enough to make that feasible. However, I can see a kind of machine proto-ethics emerging as software becomes more adaptive, using an evolutionary process to experiment with different ways of measuring and reacting to the data.
As that happens, software agents may well evolve useful rules for competing and cooperating with one another as they do their human masters’ bidding. At first, the rules would probably revolve around very basic stuff like how to exchange data and thwart rogue software, but they’d grow more complex over time. Eventually, artificial intelligences might acquire the stirrings of moral instincts for getting along in their own world. And, as their intelligence rose, explicitly understood systems of ethics. It’s hard for me to imagine that the ethics of intelligent software will have a lot in common with human ethics, since the biological drives, reasoning abilities, and needs of both communities will be largely alien to one another.
Of course, humans will want machines to serve human needs, but the problem is, we can’t make them BE human. And the more autonomy we give them, the less they will really need us for anything.
As for free will and consciousness in robots, I’ve suspected for a while that consciousness depends on a certain kind of architecture. The analogy I would give is two mirrors. In most configurations, the mirrors are not conscious. But face them toward one another, so that there’s an infinite recursion, and a conscious loop comes into being. Well, perhaps not with mirrors, but I believe that idea points in the right direction. I think that inside the human brain there’s some sort of complex synchronization that occurs in a vast network of parts, a loop of recursive self-knowing that tightens and tightens until it becomes… something more, something that science doesn’t really understand right now. Whatever that “something more” is, machines and software right now don’t have it, except maybe on some dumb insect level. But one day, they may.
JimWilton said:
Your mirror analogy is interesting. In Tibetan Buddhist thought, the mind is sometimes referred to by the metaphor of a “cosmic mirror”. I understand this to refer to the cognizant or self-aware aspect of mind as well as the notion that the mind is not separate from and reflects its environment. Another analogy is that mind is like a crystal ball sitting on a piece of brocade — it reflects its environment without being affected or altered by its environment.
Jesse said:
No need to delve into metaphysics where robots are concerned. Just look at their programming…
If they are programmed to take YOUR past behavior into account in deciding how THEY will behave toward you in the future, then it would be best to treat them as if they were feeling beings – because that is what such reactions are for.
It is irrelevant whether the robot is aware of what it is doing or not. If it can take these factors into account, then you’d have to treat it as if it CAN feel, because mistreating it or causing it harm may cause it to react poorly to you in the future. It may avoid you, react defensively, or even respond aggressively in self-defense if its programming allows for that.
Complex social interactions tend to start from the game-theoretic basis of ‘trust assumptions’, which are generally based on the ‘Treat others as you would be Treated’ paradigm (see the toy sketch at the end of this comment).
Once an entity is programmed to consider these factors, it simply does not matter in the slightest whether the system is self-aware – for all social purposes it will be able to treat YOU as an ethical partner/opponent.
You could even constrain its behavior by threatening it with legal sanctions and punishment, so long as it was sophisticated enough to analyze that threat.
Note that I’m not saying anything about what I *think* about whether self-aware robots are possible or how I *believe* they should be treated, I’m simply saying that once an analytical system begins to take all these social judgement factors into account in its decision making, it technically doesn’t MATTER whether it is self-aware any more, in terms of outcome.
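To make the reciprocity point concrete, here is a toy sketch (purely hypothetical, in Python) of an agent that conditions its behavior on your past behavior, with no self-awareness anywhere in it:

```python
# A toy reciprocity rule: the agent conditions its next move on how you
# treated it last time. Nothing here is self-aware, but for social purposes
# it still pays to treat it well. Hypothetical illustration only.

class ReciprocalAgent:
    def __init__(self):
        self.last_treatment = "cooperate"  # assume good faith on first contact

    def observe(self, your_action):
        """Record how you just treated the agent."""
        self.last_treatment = your_action

    def act(self):
        """Mirror your previous action: treat you as you treated it."""
        return self.last_treatment

agent = ReciprocalAgent()
for my_move in ["cooperate", "defect", "defect", "cooperate"]:
    print("I", my_move, "-> agent responds:", agent.act())
    agent.observe(my_move)
```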
Jesse said:
If you really want to study a current case for AI ethics, don’t look at the battlefield – those drones can’t make decisions, they’re just missiles with more missiles attached.
Look instead to Wall Street’s trading programs, which must make actual decisions at speeds far faster than humans can track.
These trading algorithms are in fact simple AIs that have to make true gaming decisions, which ultimately DO have ethical ramifications, though they don’t think of them that way. They’re just trying to optimize their actions while reflecting on the likely actions and reactions of other players…
… which is the basis of all ethics, so if any area of AI can be said to be engaged with these problems today, it is the one exploring that frontier.
Luckily, they’ve been programmed by investment bankers and been handed control of much of the world’s money supply.
Good luck with that.