Suppose a trolley is hurtling down a track, on which are placed five innocent people with no chance to escape in time. You are standing beside a switch that will redirect the trolley onto a track where stands one innocent person, who also has no chance to escape. Should you flip the switch, and thereby kill one to save five?

Now suppose there is no track onto which the trolley can be redirected; the five innocents will be in its path no matter what happens. Instead of being beside a switch, you are standing on a bridge over the tracks, beside a very fat man looking down over the action. You can push the man over the bridge, knowing his enormous girth will stop the trolley’s movement before it hits the innocents. Should you push the man, and thereby kill one to save five?

Michael Sandel begins his famous course on Justice with this action scene, and it’s a great way to start such a course. The trolley problem, ingeniously introduced by the late Philippa Foot and elaborated by Judith Jarvis Thomson, is a wonderful way to shock beginning students out of their ethical complacency. For nearly all people faced with this problem agree they would kill one to save five in the first situation but not the second. After hearing the first case they think there’s an easy principle by which to decide the right action; after hearing the second, they are forced to admit that there isn’t.

Whether they are meek Stonehill students who feel uncomfortable disagreeing and asking hard questions, or cocky Harvard students who think they already know everything, the trolley problem forces undergraduates to think hard about ethics. It makes us realize that ethics cannot be limited to “common sense”: it makes us see that our untrained “intuitions” are not enough on their own, that there is something to be learned from studying ethics and having special training in the subject.

Still, the trolley problem can be overdone – and typically is. Many analytical ethics courses, including the one I took as an undergraduate at McGill, are effectively about nothing but the trolley problem. If one reads the great thinkers in ethics at all, one reads them merely to provide a theoretical justification for each main side of the problem: Kant for why we shouldn’t push the fat man, Mill for why we should flip the switch. There’s no place for Aristotle or Nietzsche in such a course, let alone Mencius or Śāntideva. The goal is only to hammer out some sort of principle that could allow one to be consistent in both cases. (The most common candidate for such a principle is Thomas Aquinas’s doctrine of double effect, although instructors in such courses usually skirt around giving credit for this principle to someone whose motive in coming up with it was so obviously “religious.”)

Beyond introductory courses, many analytical ethicists spend their careers coming up with fanciful hypothetical cases of the trolley sort – “thought experiments,” they are typically called, ways of isolating principles behind our existing “intuitions.” Often more and more conditions are placed on the hypothetical situation in order to elicit the response one hopes for. For example, since one might avoid pushing the fat man for fear of legal consequences, one might instead speak of Thomson’s alternate “transplant” case:

Suppose you are a solitary doctor in a remote wilderness area, perhaps the Canadian Arctic. You have five patients in front of you who each need a different organ transplant: one needs a new heart, one needs a lung and so on. You have the knowledge and equipment to perform the transplants, but you don’t have the donated organs, and there is no way they can be shipped to you in time. But a lone explorer walks into your clinic for comprehensive health tests that require he be sedated. The tests come back very well: all of his organs are in perfect shape. And a thought occurs to you: you could cut him up and take out all of those organs to give to the other five who need them. Nobody would ever find out. Should you cut him up, and thereby kill one to save five?

But one begins to wonder: what sort of ethical exercise is this, anyway? What sort of situations – what sort of life – is one preparing to deal with? True, in most human lives one will face decisions about sacrificing some people’s interests for the sake of others. If one is a medical doctor or in the military, lives may very well be at stake. The trouble is, in these actual situations, the additional factors in one’s decision – like the possibility of being caught – will be more numerous, not fewer. Hypothetical examples like the trolley problem are designed to bring in the economist’s method of ceteris paribus, “other things being equal.” But in philosophy as in economics, the ceteris are never actually paribus. If one is to make the right decision in a real case, one can’t merely leave out the “extraneous” factors; everything must be part of the decision. Moreover, in many such cases – the trolley cases certainly suggest this – the decision must be made in a split second. If one is to be prepared to do the right thing when a real hard case comes up, shouldn’t one be thinking through similar real hard cases, rather than fanciful science-fiction scenarios? (The reliance on hypothetical cases may be one more reason why surveys find ethicists aren’t actually more ethical than anyone else.) I’ve suggested before that this sort of point is what underlies the contemporary resurgence of “virtue ethics”: a shifting of our philosophical concern away from rare or hypothetical cases to the difficult task of acting better in everyday life.

Sandel, I think, got this right. Begin ethical reflection with the trolley problem as a wonderful pedagogical device to shock students out of their complacency and get them actually thinking. But after that first introductory moment, get them thinking about real cases in their complexity, and the deep thinkers who are justly revered for their sustained reflection on that complexity.