Artificial intelligence is all the rage right now, and for good reason. When ChatGPT first made the news this December, I tested it by feeding it the kind of prompt I might give for a short comparison essay assignment in my Indian philosophy class. I looked at the result and thought: “this is a B-. Maybe a B.” It certainly wasn’t a good paper; it was mediocre – but no more mediocre than the passing papers submitted by lower-performing students at élite universities. So at Boston University my colleagues and I held a sold-out conference to think about how assignments and their marking will need to change in an era when students have access to such tools.

As people spoke at the conference, my mind drifted to larger questions beyond pedagogy. One professor in the audience noted that she’d used ChatGPT enough that when it was down for a couple of days she typed in “ChatGPT, I missed you”, and it had a ready response (“I don’t have emotions, but thank you.”) In response, a presenter mentioned a different AI tool called Replika, which simulates a romantic partner – and looks to be quite popular. Replika’s site bills itself as “the AI companion who cares” and “the first AI with empathy”. All this indicates to me that while the larger philosophical questions about AI have been asked for a long time, in the 2020s they are no longer hypothetical.

For the past few decades it has been commonplace to refer to the Turing test as a measure of the difference between humans and computers. In his 1950 paper “Computing machinery and intelligence”, Alan Turing proposed making the question “Can machines think?” more precise by asking instead whether they could do well at an “imitation game” where a neutral interviewer cannot tell the difference between the answers provided by a human being and a computer.
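
To make the setup concrete, here is a minimal sketch in Python of what a single blinded trial of the imitation game might look like. The names in it (imitation_game_trial, human_respond, machine_respond, the interviewer function) are hypothetical stand-ins of my own, not anything Turing specified; the point is only that the interviewer sees anonymized transcripts and nothing else.

```python
import random

def imitation_game_trial(interviewer, human_respond, machine_respond, questions):
    """One blinded trial of the imitation game (a toy sketch).

    human_respond and machine_respond are hypothetical callables that map a
    question string to an answer string; interviewer receives the two
    anonymized transcripts and guesses which label ("A" or "B") is the machine.
    """
    # Randomly assign the anonymous labels so the interviewer cannot rely on order.
    pair = [("human", human_respond), ("machine", machine_respond)]
    random.shuffle(pair)
    labels = {"A": pair[0], "B": pair[1]}

    # Each respondent answers the same questions; the interviewer sees only text.
    transcripts = {
        label: [(q, respond(q)) for q in questions]
        for label, (_, respond) in labels.items()
    }

    guess = interviewer(transcripts)  # the interviewer returns "A" or "B"
    truth = next(label for label, (who, _) in labels.items() if who == "machine")
    return guess == truth             # True means the machine was caught

# The machine "does well" in this toy version if, across many such trials,
# interviewers identify it no better than chance.
```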

But if the Turing test is all we’ve got, I think we’re in trouble. Many claim that ChatGPT has already passed the test; even those who say it hasn’t are still ready to say that it likely will soon. On Turing’s account, that would be enough to proclaim that large language models (LLMs) like GPT can think.

In his keynote at this year’s Eastern APA, David Chalmers similarly argued that even if large language models can’t think yet, they likely will be able to soon. I am not entirely convinced that LLMs can think, but I’m going to put that claim aside for the moment, because I think there are deeper and more fundamental issues at stake – especially in ethics.

A decade ago I noted that certain AI systems now needed something like ethics as a part of their programming, because it’s not hard to imagine self-driving vehicles facing a literal case of the trolley problem. But that’s only one side of the ethical picture – AIs as subjects of ethics. What about AIs as objects of ethics? Especially as objects of ethical obligation? Do we have obligations to them, as we do to our fellow humans?
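
Before turning to that question, a toy sketch of the first side – what I mean by ethics becoming part of the programming. It is not anyone’s actual self-driving code, and every name in it (Outcome, expected_harm, pedestrian_weight) is hypothetical; the point is only that once a collision-avoidance routine has to rank bad outcomes, whatever ranking it encodes is, implicitly, an ethical position.

```python
# Toy illustration only: a hypothetical collision-avoidance chooser for an
# autonomous vehicle. The point is not the specific numbers but that some
# ranking of harms has to be encoded, and that ranking is an ethical choice.

from dataclasses import dataclass

@dataclass
class Outcome:
    maneuver: str             # e.g. "stay_in_lane", "swerve_left"
    occupants_at_risk: int    # people inside the vehicle who may be harmed
    pedestrians_at_risk: int  # people outside the vehicle who may be harmed

def expected_harm(o: Outcome, pedestrian_weight: float = 1.0) -> float:
    # The weight given to pedestrians versus occupants is exactly the kind of
    # trolley-problem judgement that ends up living in code.
    return o.occupants_at_risk + pedestrian_weight * o.pedestrians_at_risk

def choose_maneuver(options: list) -> Outcome:
    # Pick whichever option the harm function scores lowest.
    return min(options, key=expected_harm)

if __name__ == "__main__":
    options = [
        Outcome("stay_in_lane", occupants_at_risk=0, pedestrians_at_risk=2),
        Outcome("swerve_left", occupants_at_risk=1, pedestrians_at_risk=0),
    ]
    print(choose_maneuver(options).maneuver)  # "swerve_left" under these weights
```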

For about as long as human beings have been making machines, we have treated it as obvious that we have no ethical obligations to those machines. We humans may have obligations to program machines ethically, with respect to their behaviour toward other humans – surely we do have such obligations in the case of self-driving vehicles. But we don’t have obligations to treat the machines well except as extensions of other humans. (If your sister just bought a new $3000 gaming rig and you throw acid on it, you are being immoral – to her.)

But once the machines have become sophisticated enough to pass Turing tests, new questions start to arise. If you meet a human being through an online game and start sending romantic texts to the point that it becomes a long-distance relationship, and then after months or years you abruptly end the relationship without explanation, many would argue you’ve done something wrong. But if Replika’s technology passes the Turing test, then the text exchange – yours and your partner’s – would look exactly the same as it would have with a human partner. If you suddenly decide to stop using Replika, have you done something wrong there as well?

This question isn’t just hypothetical. Google engineer Blake Lemoine, having interacted extensively with the company’s AI system, declared that the system was sentient and therefore refused to turn it off, getting fired for his trouble. Many people are tempted to bite bullets on this question and say there’s really no relevant difference between humans and a sufficiently advanced AI – but if they do, then they’re going to have to say either that Lemoine was right, or that he was merely wrong about how far the technology had developed. (As if it were fine to turn off the AI that had been developed in 2022, but not the one that will have been developed by 2027.)

Most of us, I think, aren’t willing to go there. We believe that there is a morally significant difference between a human being and next year’s iteration of Replika – even if you can’t tell that difference in text chat. But we do need to specify: what is that difference?

The most obvious one is this: a human being would be hurt by being ghosted after a long online relationship. A Replika, even a very advanced one, would not. That is, even if we’re willing to grant that an AI can think, it still can’t feel – and it is the latter that our obligations to it would depend on. To say this, note, is to deny the Replika company’s advertising that a Replika has “empathy” and “cares”. As I think we should.

One of the key things the Turing test, and any similarly functional test, obscures is the distinction between doing something and acting like you’re doing something. A human being who is good at deception can pretend to care about a romantic partner enough to fool everyone – long enough to get access to the partner’s money and run off with it. The deceiver didn’t actually care, but merely acted like they did. We know that to be the case because they ran off with the money – but even if they had died before anyone found out about the plan and left no trace of its ever having existed, it still would have been the case that they did not really care about the partner. We wouldn’t know that they didn’t care, but it would have been true.

For this reason I find it hard to believe the claim (advanced by Herbert Fingarette) that Confucius didn’t have a concept of human interiority or consciousness. Confucius had to be aware of deception. Frans de Waal persuasively demonstrates that even chimpanzees are capable of deception without having language – faking a hurt limb in order to prevent being attacked further. We primates all know what it is to act one way on the outside and feel and think differently on the inside, even if we don’t use the spatial out/in metaphor to describe that distinction.

Now what constitutes feeling, or interiority, as consciously experienced? That’s a harder question, and it’s what gives the bullet-biters their plausibility. We can say that feeling is fundamentally phenomenological, as a matter of subjective experience – but what does that mean? I think that at the heart of phenomenology in this sense is something like the grammatical second person: I know that I feel, and I recognize you as a you, a being who also feels, because I recognize in you the characteristics that mark me as a feeling being – and I learned to recognize those characteristics in others at the same time that I learned to recognize them in myself.

We recognize feeling in other human beings, and we recognize it in nonhuman animals. So when we talk of beings we have obligations to, we are speaking primarily of humans and possibly of animals. The tricky part is: could a machine be built in which we recognize feeling? How would we know? I am not yet sure how to answer that question. But I don’t think it is a satisfying answer to say “Blake Lemoine was right” – to say Google was doing harm by shutting down an AI project. Even if it passed the Turing test.