I’ve recently been enjoying Joss Whedon’s underrated science-fiction TV series Dollhouse. Whedon’s ingenious plot twists and a strong supporting cast have made the show highly enjoyable, at least since the middle of the first season; beyond that, the show’s premise is bait for philosophers, especially those who focus on the ethics of technology or enjoy “thought experiments.” It’s about a secret operation that erases people’s memories and personalities and “imprints” them with completely new ones. Given the rapid pace of advances in contemporary neuroscience, it is not entirely far-fetched to say that such a process could become feasible within my lifetime; and it raises a great many questions, familiar to Buddhists, about the nature of personal identity.
Last Friday’s episode, Belonging, implicitly makes a further point about the good life. (Spoiler warning, if you haven’t seen this episode.) Sierra, played by Dichen Lachman, was once pursued by an obsessive rich businessman, who planned elaborate setups to seduce her. When it became clear that he could never win her, he drugged her into psychosis so that she could be committed to the Dollhouse. (The Dollhouse’s policy is that sane people can become dolls only if they give their consent.) Then he frequently “hires” her to be imprinted with a personality that will love him and have sex with him.
When the Dollhouse staff find out how she arrived there, they try to forbid the businessman from using her again – but he has connections to their superiors. He threatens to have them fired, or possibly worse, if they don’t accede to his demand: that she be imprinted as his lover, to live with him forever.
The show clearly expects viewers to react with horror at this prospect, and we do – it looks like a fate worse than death. It even causes one of the show’s most amoral characters to show pangs of conscience and prevent this from happening. But here’s the thing: if pleasure is the only intrinsic good, as Neil Sinhababu has claimed recently and others (including Jeremy Bentham and to some extent Epicurus) have claimed in the past, then surely the businessman’s intended fate for Sierra is a fine and good one. She won’t remember the bad things he’s done to her or her former hatred of him; she’ll be blissfully in love with him, happily spending the rest of her days in pleasure. Before her capture she would have found this the most loathsome fate imaginable; but that doesn’t matter now, because she won’t anymore. (The thought experiment here is comparable to Robert Nozick’s experience machine, though darker and more disturbing, and therefore more provocative.)
There are a couple of ways for ethical hedonists to weasel around the problem: to say, for example, that knowing what a sleazeball this man is, he’ll probably treat her badly and therefore she won’t experience much pleasure. But it may well be that she’s programmed to love that mistreatment, to crave it. Or they might say that the man needs to be punished as a deterrent to prevent people from acting so cruelly in the future – but on a hedonistic theory, why should we deter people from creating more pleasure?
The most honest and thought-provoking reply I can envision (and I suspect this is the approach Neil might take?) is to “bite the bullet,” to accept the unpalatable consequence of a hedonistic theory and say that indeed, in a scenario like this one, it is good for Sierra’s life to become filled with pleasure at loving and serving the man she once hated. On this view, our repugnance at Sierra’s fate comes merely from “emotional perception,” which is unreliable in the way our perception that the earth is flat is unreliable; whereas our knowledge that pleasure is the only real good comes from the “phenomenal introspection” that is a more reliable guide.
For such a bullet-biting to work, of course, the argument for pleasure as the only good must itself work – and must be ironclad in order to persuade us to give up such a strongly felt reaction against it. Examples like Dollhouse give us prima facie reasons, at least, to be highly skeptical of such arguments. And in the end, as I noted, I don’t think they do work. So the staff are right to save Sierra.
Given the rapid pace of advances in contemporary neuroscience, it is not entirely far-fetched to say that such a process could become feasible within my lifetime
FWIW, I’ll weigh my expert opinion strongly against this. We’re getting increasingly good at “reading” from the brain, but that’s only so useful. First, it’s not clear that the “reading” problem is even particularly solvable at a fine-grained level. If the functional structure is completely dynamic at a certain level (a given neuron participates in one function at one moment, but 2ms later it’s participating in a different function), it may be as hard as the “writing” problem. Second, the “writing” problem is grotesquely unsolvable at this point; the technical baseline is several ridiculous paradigm shifts away (a stimulating mechanism that picks out individual neurons), and that’s before we can really even start addressing the scientific challenges.
So, my professional guess is that brain-imprinting will not be available this century.
Well, that’s why I hedged my bets with the “not entirely far-fetched” phrase. If I had to make a wager one way or another, I’d follow your professional guess. But science can surprise us. As late as 1950, well informed people believed it was impossible for human beings to travel to the moon. It is extremely far-fetched to imagine that personality imprinting would happen in quite the way it happens on Dollhouse, where it’s mostly the work of one solitary mad genius and doesn’t even seem to involve any invasive surgery. But for some form of the process to happen… unlikely, no doubt, but thinkable enough that it’s worth trying to imagine the implications.
Oh, absolutely – there’s certainly nothing wrong with considering the idea and its implications. My thoughts have no bearing whatsoever on your argument or ideas; they’re just intended to provide background information to the science fiction.
Predictions about scientific progress are fun, especially in neuroscience. Artificial Intelligence was “30 years away” for at least 40-50 years until people got wise and stopped predicting. New paradigm advances come unexpectedly, and it’s hard to foresee when or where; but brain-writing is missing both theoretical and technical leaps, probably multiple of each. In the 1950s, most people didn’t know it, but lunar landings didn’t require any theoretical leaps: rocketry and orbital dynamics were already out there.