Artificial intelligence is all the rage right now, and for good reason. When ChatGPT first made the news this December, I tested it by feeding it the kind of prompt I might give for a short comparison essay assignment in my Indian philosophy class. I looked at the result and thought: “this is a B-. Maybe a B.” It certainly wasn’t a good paper; it was mediocre, but no more mediocre than the passing papers submitted by lower-performing students at élite universities. So at Boston University my colleagues and I held a sold-out conference to think about how assignments and their marking will need to change in an era when students have access to such tools.
As people spoke at the conference, my mind drifted to larger questions beyond pedagogy. One professor in the audience noted that she’d used ChatGPT enough herself that, when it was down for a couple of days, she typed in “ChatGPT, I missed you”, and it had a ready response (“I don’t have emotions, but thank you.”). In response, a presenter mentioned a different AI tool called Replika, which simulates a romantic partner and appears to be quite popular. Replika’s site bills it as “the AI companion who cares” and “the first AI with empathy”. All this indicates to me that while the larger philosophical questions about AI have been asked for a long time, in the 2020s they are no longer hypothetical.