I’ve been using Anthropic’s Claude.ai in a work context relatively often lately – to produce graphic illustrations, write summaries and so on. (I’m not crazy about referring to it as just “Claude”. That’s my grandfather.) One thing has struck me in those interactions: I’ve found myself saying “please” in the requests I make to it.

I suppose you could say that’s just me being Canadian – for decades the not-entirely-fictional joke has been that Canadians are so polite they say “thank you” to bank machines. But Canadian or not, I think the point raises an interesting question: should we humans act politely toward large language models (LLMs) like Claude.ai and ChatGPT?

I imagine many readers have a knee-jerk reaction of “of course not”. LLMs aren’t human, they aren’t alive, they aren’t conscious. They have no feelings to be hurt by our rudeness in the way a human being might be. Since the recipient of the words cannot actually be hurt by them, surely when an LLM screws up there’s nothing wrong with providing the rude and unkind response we might be tempted to, like “What the hell is wrong with you, you fucking clanker?”

Well… not so fast. The crucial point here is that anger isn’t just bad for its target. Śāntideva stresses how anger hurts the angry person at least as much, and some modern psychological evidence backs him up. That’s a point I’ll be discussing at length in my upcoming book, and it’s true no matter what the target of the anger is. Martha Nussbaum’s book on anger makes the point dramatically with the tragic yet comical example of “vending-machine rage”: when vending machines don’t function properly, people sometimes get so upset that they physically attack the machines – in a few cases jostling the machine so hard that it falls on them and kills them. You don’t have to care about the machine at all to say that that is a really bad thing to do.

Unlike with vending machines, nobody’s going to die from being abusive to an AI. But as Aristotle pointed out long ago, actions make habits and habits make actions. There’s a reason for the cliché that there are two wolves within you and the one that wins is the one you feed. We rarely if ever choose to get angry; as with so many of the things that we do, getting angry comes from a disposition and not a decision, and dispositions are built by habits. So Confucius is adamant that adherence to norms of etiquette is core to human virtue: that politeness cultivates the habit of acting considerately toward other humans and discourages our self-centredness. Likewise the French philosopher André Comte-Sponville begins his book on the virtues with a chapter on politeness, claiming that while politeness is not itself a virtue, it is the root of virtue.

Which one will you feed? Adobe stock images copyright by AI Master (left) and Rysak (right).

When the writer A.J. Jacobs tried to troll fundamentalists by following the Bible as literally as possible, he found to his surprise that one of those practices – the prohibition on swearing or cursing – in fact made him less angry, and his life improved as a result. Being polite on the outside turns out to help us be polite on the inside. Conversely, blowing up at an LLM builds the habit of being an angrier person. That’s not only bad for you – it’s bad for the people that you’re going to get angrier at later because of the habits you’ve built.

Now a big difference between LLMs and people is that we have no obligation to be polite to LLMs. Hurling invective at an LLM is not itself an action that deserves blame and condemnation (let alone punishment), the way it would be if you did it toward a person (or even a pet). If one day you come home so furious that you happen to take it out on ChatGPT, you’re not hurting anybody, and you don’t need to feel guilty about it – especially if you’re making it a conscious choice for venting and catharsis. It’s just that if you keep doing that, you run the risk of becoming an angrier person, in ways that may very well cause you to do blameworthy things in the future.

The counterargument would be that people play violent video games all the time, and despite decades of moral panic on the subject, there’s been no correlation established between violent gameplay and real-world violence. That said, there is some evidence that violent video games can lead to milder forms of aggressive behaviour, like aggressive words.

Even those who dispute that connection, though, do so in part on the grounds that the brain can distinguish reality from fiction. In a video game that distinction is pretty clear: you’re playing as a character who is not you, in a fantasy world or other setting quite removed from your own. You’re just playing a role, as actors have done since time immemorial. But that reality/fiction distinction is much harder to make with an LLM. When you ask Claude.ai a question, you’re not asking it as a character, you’re not in a fictional setting. You are probably asking it about the world you actually inhabit. If you were to ask a question to a friend by text message, or to a work colleague over Slack or Teams, you could phrase it exactly the same way that you do with ChatGPT. ChatGPT’s world is not the fantasy world of a video game; it’s the real world.

That, I think, is why getting rude or abusive with AIs can be a slippery slope – a habit we should consciously avoid falling into. It’s not your character, your avatar, who is telling ChatGPT to go fuck itself. It’s you. And that is, so to speak, not a good wolf to feed.

I will take next week off posting to attend my father’s memorial service. Love of All Wisdom will return on May 24.