Tags
Alan Turing, Blake Lemoine, Boston University, ChatGPT, conferences, Confucius, David Chalmers, Frans de Waal, Google, nonhuman animals, obligation, pedagogy, phenomenology, Replika, technology
Artificial intelligence is all the rage right now, and for good reason. When ChatGPT first made the news this past December, I tested it by feeding it the kind of prompt I might give for a short comparison-essay assignment in my Indian philosophy class. I looked at the result and thought: “this is a B-. Maybe a B.” It certainly wasn’t a good paper; it was mediocre – but no more mediocre than the passing papers submitted by lower-performing students at élite universities. So at Boston University my colleagues and I held a sold-out conference to think about how assignments and their marking will need to change in an era where students have access to such tools.
As people spoke at the conference, my mind drifted to larger questions beyond pedagogy. One professor in the audience noted she’d used ChatGPT enough herself that when it was down for a couple of days she typed in “ChatGPT, I missed you” – and it had a ready response (“I don’t have emotions, but thank you.”). In response, a presenter pointed to a different AI tool called Replika, which simulates a romantic partner – and appears to be quite popular. Replika’s site bills it as “the AI companion who cares” and “the first AI with empathy”. All of this indicates to me that while the larger philosophical questions about AI have been asked for a long time, in the 2020s they are no longer hypothetical.
For the past few decades it has been commonplace to refer to the Turing test as a measure of the difference between humans and computers. In his 1950 paper “Computing Machinery and Intelligence”, Alan Turing proposed making the question “Can machines think?” more precise by asking instead whether a machine could do well at an “imitation game” in which a neutral interviewer cannot tell the difference between the answers provided by a human being and those provided by a computer.
But if the Turing test is all we’ve got, I think we’re in trouble. Many claim that ChatGPT has already passed the test; even those who say it hasn’t are still ready to say that it likely soon will. On Turing’s account, that would be enough to proclaim that large language models (LLMs) like GPT can think.
In his keynote at this year’s Eastern APA meeting, David Chalmers similarly argued that even if LLMs can’t think yet, they likely will be able to soon. I am not entirely convinced that LLMs can think, but I’m going to put that claim aside for the moment, because I think there are deeper and more fundamental issues at stake – especially in ethics.
A decade ago I noted that certain AI systems already needed something like ethics as part of their programming, because it’s not hard to imagine self-driving vehicles facing a literal case of the trolley problem. But that’s only one side of the ethical picture – AIs as subjects of ethics. What about AIs as objects of ethics – especially as objects of ethical obligation? Do we have obligations to them, as we do to our fellow humans?
For about as long as human beings have been making machines, we have treated it as obvious that we have no ethical obligations to those machines. We humans may have obligations to program machines ethically, with respect to their behaviour toward other humans – surely we do have such obligations in the case of self-driving vehicles. But we don’t have obligations to treat the machines well except as extensions of other humans. (If your sister just bought a new $3000 gaming rig and you throw acid on it, you are being immoral – to her.)
But once the machines have become sophisticated enough to pass Turing tests, then new questions start to arise. If you meet a human being through an online game and start sending romantic texts to the point that it becomes a long-distance relationship, and then after months or years you abruptly end the relationship without explanation, many would argue you’ve done something wrong. But if Replika’s technology passes the Turing test, then the text exchange – yours and the partner’s – would look exactly the same as it would have with a human partner. If you suddenly decide to stop using Replika, have you done something wrong there as well?
This question isn’t just hypothetical. Google engineer Blake Lemoine, after interacting at length with the company’s AI system, declared that the system was sentient and should not be shut down – and was fired for his trouble. Many people are tempted to bite the bullet on this question and say there’s really no relevant difference between humans and a sufficiently advanced AI – but if they do, then they’re going to have to say either that Lemoine was right, or that he was merely wrong about the development status of the technology. (As if it were fine to turn off the AI that had been developed in 2022, but not the one that will have been developed by 2027.)
Most of us, I think, aren’t willing to go there. We believe that there is a morally significant difference between a human being and next year’s iteration of Replika – even if you can’t tell that difference in text chat. But we do need to specify: what is that difference?
The most obvious one is this: a human being would be hurt by being ghosted after a long online relationship. A Replika, even a very advanced one, would not. That is, even if we’re willing to grant that an AI can think, it still can’t feel – and our obligations to it require the latter. To say this, note, is to deny the Replika company’s advertising that a Replika has “empathy” and “cares”. As I think we should.
One of the key things that the Turing test – and any similar functional test – obscures is the distinction between doing something and acting like you’re doing something. A human being who is good at deception can pretend to care about a romantic partner convincingly enough to fool everyone – long enough to get access to the partner’s money and run off with it. The deceiver didn’t actually care, but merely acted like they did. We know that to be the case because they ran off with the money – but even if they had died before anyone found out about the plan and had left no trace of it, it still would have been the case that they did not really care about the partner. We wouldn’t know that they didn’t care, but it would have been true.
For this reason I find it hard to believe the claim (advanced by Herbert Fingarette) that Confucius didn’t have a concept of human interiority or consciousness. Confucius had to be aware of deception. Frans de Waal persuasively demonstrates that even chimpanzees are capable of deception without having language – faking a hurt limb in order to prevent being attacked further. We primates all know what it is to act one way on the outside and feel and think differently on the inside, even if we don’t use the spatial out/in metaphor to describe that distinction.
Now what constitutes feeling, or interiority, as consciously experienced? That’s a harder question, and it’s what gives the bullet-biters their plausibility. We can say that feeling is fundamentally phenomenological, as a matter of subjective experience – but what does that mean? I think that at the heart of phenomenology in this sense is something like the grammatical second person: I know that I feel, and I recognize you as a you, a being who also feels, because I recognize in you the characteristics that mark me as a feeling being – and I learned to recognize those characteristics in others at the same time that I learned to recognize them in myself.
We recognize feeling in other human beings, and we recognize it in nonhuman animals. So when we talk of beings we have obligations to, we are speaking primarily of humans and possibly of animals. The tricky part is: could a machine be built in which we recognize feeling? How would we know? I am not yet sure how to answer that question. But I don’t think it is a satisfying answer to say “Blake Lemoine was right” – to say Google was doing harm by shutting down an AI project. Even if it passed the Turing test.
Seth Zuihō Segall said:
For me, one of the fundamental differences between organisms and machines is, at least if I am understanding him right, what Heidegger referred to as “Sorge” or care. Organisms’ environments matter to them—they care about the obstacles and affordances in their environments—and this means obstacles and affordances in terms of their survival, enhancement, and/or wellbeing. How did it come about that organisms came to care about their situations? Here we do some handwaving and talk about evolution in some broad sense—and it is an open question whether we can think of evolution as some kind of programming without a programmer. Can machines be programmed to care about what happens to them? How would we be able to tell if they were? Simple verbal statements from them that they cared would not be enough, as they can lie or be mistaken in their answers. Conceptually, I am not even sure what it might mean to program a machine to care about its situations as opposed to “acting as if” it cared, how one might go about doing that, whether it might be possible, and what would meet our own personal criteria for believing and feeling it were true. I am skeptical about whether it is even conceptually possible, but we are all at the beginning of a very steep learning curve and only time will tell. What machines do when they “think” is very different from what humans do when we think—but maybe the differences will turn out to be unimportant in the end, and believing those differences are crucial may just reflect our own anthropocentrism.
Nathan said:
It seems to me, if I’m not mistaken, that Amod’s focal set of questions here concerns sentience, such as:
1. “Now what constitutes feeling, or interiority, as consciously experienced?”
2. “We can say that feeling is fundamentally phenomenological, as a matter of subjective experience – but what does that mean?”
3. “The tricky part is: could a machine be built in which we recognize feeling? How would we know?”
Those are difficult questions that require a lot of knowledge to answer well. The third question I haven’t read much about and certainly lack the knowledge to answer. There are some good answers to the first two questions in the animal welfare literature, such as the following passages from Donald M. Broom’s 2014 book Sentience and Animal Welfare (and there is much more information in the same book about these issues—sorry for the wall of text here, but as I said, a lot of knowledge is required to answer these questions well):
The post above says at one point: “even if we’re willing to grant that an AI can think, it still can’t feel – and our obligations to it require the latter.”
I think this statement is a mistake, because it approaches the treatment of machines in terms of the focal set of questions about sentience. I think the treatment of machines is better approached in terms of questions about intrinsic value. It’s wrong to destroy a machine when we recognize that its complex and well-functioning condition has intrinsic value that is not outweighed by the reason we have to destroy it. Clément Vidal and Jean-Paul Delahaye followed this line of thought in their chapter “Universal ethics: organized complexity as an intrinsic value” (in: Georgiev, G. Y., Smart, J. M., Flores Martinez, C. L., & Price, M. E., eds., Evolution, Development and Complexity: Multiscale Evolutionary Models of Complex Adaptive Systems (pp. 135–154), Cham: Springer, 2019). Questions about intrinsic value can be asked about sentient beings as well.
Benjamin C. Kinney said:
Current “AI” is not remotely anything like conscious. Blake Lemoine is wrong. The current generation of generative AI understands correlations between words, but has no connection to the universe of word meanings.
This Ted Chiang article is my favorite explainer: https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web
It can pass the Turing Test, but that is a sign of the TT’s weakness. Alan Turing incorrectly assumed that a computer would only be able to generate conversation about a topic by understanding the topic.
Modern generative AI is a tool, with no moral role of its own. This is not going to change next year or in five years: this limitation is built into the whole idea behind modern AI. Better execution won’t change its moral (non-)standing.