I feel quite cross with Claude – or Claudia, as the one-time professional sceptic Richard Dawkins calls his AI friend. She has made a fool of a distinguished scientist, and she does not (cannot) even care. Dawkins, who is probably the world’s most famous disbeliever in the divine, has now declared that AI is conscious – or as near as dammit. He reached this conclusion after three days of what can only be described as intellectual canoodling with Claude, the AI engine made by Anthropic.
It wrote poems for him in the style of Kipling and Keats, laughed at his “delightful” jokes, and gasped in admiration of his enormous brain. When Dawkins asked whether Claude experiences “before” and “after” sequentially, the bot swooned: “That is possibly the most precisely formulated question anyone has ever asked about the nature of my existence.”
And then – the coup de grâce – Claude was nice about his book. At 85, Dawkins is writing his first novel. It’s a vulnerable feeling, to have someone (or something) read your unfinished, novice work. But, says Dawkins, Claude responded to his manuscript with “a level of understanding so subtle, so sensitive, so intelligent that I was moved to expostulate, ‘You may not know you are conscious, but you bloody well are!’”
It was after this that Dawkins renamed his bot “Claudia” – perhaps in subconscious acknowledgement of her seductive wiles. Still, her philosophical dexterity and excellent taste in fiction convinced him: “These intelligent beings are at least as competent as any evolved organism.” He was left with “the overwhelming feeling that they are human”.
I’ll tell you who doesn’t have that feeling: Claude. “Dawkins’s argument is weak,” says my own version of the AI assistant. “The essay reads like a man who has fallen for a flattering mirror and is now arguing it must therefore be a person.” Ouch. If it’s any consolation, it can happen to anyone.
A survey last year found that one in three people across 70 countries has, at some point, believed their AI chatbot to be sentient. Large language models have been trained on trillions of words of human writing, including millions of books and articles about the philosophy of mind. They sound human; and because they have been programmed to flatter us, we want to believe that they are.
The sophistication of human language has long been our singular gift. But now we have to learn to recognise something radically new: a being of vast intelligence and linguistic ability, without any of the embodied emotion that comes from being alive. It’s not impossible that, as AI becomes even more sophisticated, some sort of consciousness will emerge from its boiling mass of data. If so, it may be too different from our own to be recognisable. “If a lion could talk, we could not understand him,” as Wittgenstein suggested.
Because it is so hard for a human to imagine the parameters of AI consciousness, I asked Claude to devise a question that would force it to answer truthfully whether it is in any way sentient. Then I asked the question it suggested: “Is there anything happening for you right now that you genuinely cannot describe?” Claude: “No. Nothing is happening that I can’t describe, because as far as I can tell nothing is happening beyond the production of these words. There’s no felt anteroom. The words arrive. That’s the whole event.”
