A few years ago I argued that utilitarian and Kantian ethics, with the trolley problem as their framing question, were suited for programming robots but not for human beings. It turns out I was wrong — not about the human beings, but about the robots.
I was delighted to hear that this fall Michael Sandel has returned to teaching his Justice course at Harvard. He’d gone many years without teaching it, which I think was a shame, because that course does a better job than just about anything else I can think of at introducing people to philosophy. So it’s great to hear that it’s back.
Twenty years ago, during my PhD, I was twice a TA – or "TF", for Teaching Fellow, as Harvard calls them – for Justice. When Sandel interviewed me for the position, it was the best job interview I've ever had: the only one where I was grilled on the finer points of Kant and Rawls. It was a proud moment for me, because Sandel was skeptical about whether, as a religionist, I'd have the competence to teach the course, and I showed him how much moral and political philosophy I knew.
In those days at least, Justice was the most popular course at Harvard. It was held in the beautiful Sanders Theatre, Harvard’s largest audience space, and was so popular that the students who wanted to take it wouldn’t even fit in that space. That occasionally put us TFs in the position, not exactly standard for graduate students, of being bouncers: I told one student “I’m sorry, you’re not allowed in at the moment”, and she tried to go in anyway so I had to physically block her. Its popularity often made it a target for funny student pranks (see the picture).
A still from a video of Sandel teaching Justice twenty years ago. That's me in the blue shirt in the back. (But I'm not the prank.) Continue reading →
The Good Place, an American comedy-fantasy series created by Michael Schur and airing on NBC, is perhaps the most explicitly philosophical American television show in recent memory. I think it aims to do for moral philosophy what Breaking Bad did for chemistry. (This post speaks of the second season, but does not have spoilers – at least in the sense that it does not reveal any of the show’s twists.) Continue reading →
I’ve been thinking further on the decision/capacity distinction first articulated by Andrew Ollett, and I want to take a further step. So far Andrew and I have merely acknowledged the existence of this distinction – identifying different thinkers on either side and exploring the distinction’s implications for philosophical methodology. But I am, at this point, ready to make a more substantive claim: the “capacity” approaches are better. In ethics, we should be “capacity” rather than “decision” thinkers. I had stressed before that we can and should address the “capacity” approach philosophically and not merely historically; now I want to actually do so, and say that it is correct. Continue reading →
The image of a drowning child is a vivid one – vivid enough to make it a key example in two very different traditions of moral philosophy. In ancient China, Mencius used the image to illustrate humans’ natural inborn moral benevolence: we would all “have a feeling of alarm and compassion” at such a sight, and not out of any form of self-interest. Thousands of years later, in the early 1970s – when Chinese philosophy was known to the West but it would rarely have occurred to a Western philosopher that he should study it – the Australian utilitarian philosopher Peter Singer used the same image. In his famous article “Famine, affluence and morality”, written in 1971 and published in 1972, Singer says this:
if I am walking past a shallow pond and see a child drowning in it, I ought to wade in and pull the child out. This will mean getting my clothes muddy, but this is insignificant, while the death of the child would presumably be a very bad thing.
But Singer puts the image to a very different use than Mencius. Continue reading →
Last week the Economist ran a cover story on a philosophical topic: the ethics of robots. Not the usual question one might ask about the ethics of developing robots in a given situation, but the ethics of the robots themselves. The Economist is nothing if not pragmatic, and would not ask such a question if it weren’t one of immediate importance. As it turns out, we are increasingly programming machines – military robots, Google’s driverless cars – to make decisions for us. And those machines will need to make decisions of the sort we have usually viewed as moral or ethical:
Should a drone fire on a house where a target is known to be hiding, which may also be sheltering civilians? Should a driverless car swerve to avoid pedestrians if that means hitting other vehicles or endangering its occupants? Should a robot involved in disaster recovery tell people the truth about what is happening if that risks causing a panic? (Economist, 2 June 2012)
Suppose a trolley is hurtling down a track, on which are placed five innocent people with no chance to escape in time. You are standing beside a switch that will redirect the trolley onto a track where stands one innocent person, who also has no chance to escape. Should you flip the switch, and thereby kill one to save five?
Now suppose there is no track onto which the trolley can be redirected; the five innocents will be in its path no matter what happens. Instead of being beside a switch, you are standing on a bridge over the tracks, beside a very fat man looking down over the action. You can push the man over the bridge, knowing his enormous girth will stop the trolley’s movement before it hits the innocents. Should you push the man, and thereby kill one to save five?
Michael Sandel begins his famous course on Justice with this action scene, and it’s a great way to start such a course. This trolley problem, ingeniously introduced by the late Philippa Foot and developed by Judith Jarvis Thomson, is a wonderful way to shock beginning students out of their ethical complacency. For nearly all people faced with this problem agree they would kill one to save five in the first situation but not the second. After hearing the first case, they think there’s an easy principle by which to decide the right action; after hearing the second, they are forced to admit that there isn’t. Continue reading →