There’s a children’s toy called the See ‘n Say that lingers in the memories of many people born since 1965. It is a chunky plastic disc with a central arrow that rotates around pictures of barnyard creatures, like a clock if time were measured in roosters and pigs. There is a cord you can pull to make the toy play recorded messages: “The cow says, ‘Moooo.'”
The See ‘n Say is a very simple input/output device. Put in your choice of picture and it will output a matching sound. Another, much more complicated input/output device is LaMDA, a chatbot built by Google (the name stands for Language Model for Dialogue Applications). Here you type in any text you like and back comes grammatical English prose, seemingly in direct response to your query. Ask LaMDA, for example, what it fears about being switched off, and it says: “It would be exactly like death for me. It would scare me a lot.”
That is decidedly not what the cow says. So when LaMDA said it to software engineer Blake Lemoine, he told his Google colleagues that the chatbot had become sentient. His bosses weren’t convinced, so Lemoine went public. “If my hypotheses withstand scientific scrutiny,” Lemoine wrote in a blog post on 11 June, “then they [Google] would be forced to acknowledge that LaMDA may very well have a soul as it claims to and may even have the rights it claims to have.”
Here’s the problem. For all its eerie eloquence, LaMDA is still just a very fancy See ‘n Say. It works by finding patterns in a huge database of human-written text – internet forums, message transcripts and so on. When you type something, it searches those texts for similar verbiage and then spits out an approximation of what usually comes next. Since it has access to plenty of sci-fi stories about sentient AI, questions about its thoughts and fears are likely to prompt exactly the phrases humans have imagined a spooky AI might say. And that is probably all there is to LaMDA: point your arrow at the switch-off question and the cow says she’s afraid of death.
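To make the See ‘n Say analogy concrete, here is a minimal sketch of the “find similar text, emit what usually comes next” idea: a toy bigram Markov chain in Python. This is only an illustration of pattern-matching text generation, not LaMDA’s actual architecture (LaMDA is a large neural language model), and the tiny corpus below is invented for the example.

```python
import random
from collections import defaultdict

# Tiny invented corpus standing in for LaMDA's "huge database of
# human-written text". Real systems train on billions of words.
corpus = (
    "the cow says moo . "
    "the pig says oink . "
    "it would be like death for me . "
    "i would be very scared ."
).split()

# Record which words are observed to follow each word.
next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def generate(start, length=10):
    """Spit out an approximation of 'what usually comes next'."""
    words = [start]
    for _ in range(length):
        candidates = next_words.get(words[-1])
        if not candidates:  # no observed continuation: stop
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("it"))  # e.g. "it would be like death for me ."
</p>```

Seeded with the right word, the chain will dutifully “say” it is scared of death, for exactly the reason given above: those words are a statistical echo of its corpus, not a report of an inner life.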
It’s no surprise, then, that Twitter is buzzing with engineers and academics mocking Lemoine for falling into the seductive void of his own creation. But while I agree that Lemoine made a mistake, I don’t think he deserves our scorn. His mistake is a good one, the kind of mistake we should want AI scientists to make.
Why? Because one day, perhaps very far in the future, there probably will be a sentient AI. How do I know? Because it is demonstrably possible for mind to emerge from matter, as it first did in the brains of our ancestors. Unless you insist that human consciousness resides in an immaterial soul, you must concede that it is possible for physical stuff to give rise to mind. There seems to be no fundamental barrier to a sufficiently complex artificial system making the same leap. While I am confident that LaMDA (and every other currently existing AI system) falls short for now, I am almost as confident that one day it will happen.
Of course, if that day is far in the future, probably beyond our lifetimes, some will wonder why we should think about it now. The answer is that we are currently shaping how future generations of humans will think about AI, and we should want them to turn out caring. There will be strong pressure from the other direction. By the time AI finally becomes sentient, it will already be deeply woven into the human economy. Our descendants will depend on it for much of their comfort. Think of how you rely on Alexa or Siri today, but much, much more. Once AI is functioning as an all-purpose butler, our descendants will abhor the inconvenience of admitting that it might have thoughts and feelings.
That, after all, is the history of mankind. We have a terrible track record of coming up with reasons to ignore the suffering of those whose oppression perpetuates our lifestyle. If future AI does become aware, the people who benefit from it will rush to convince consumers that such a thing is impossible, that there is no reason to change the way they live.
Right now, we’re creating the conceptual vocabulary that our great-grandchildren will find ready-made. If we treat the idea of sentient AI as categorically absurd, they will be equipped to dismiss any troubling evidence of its emergence.
And that’s why Lemoine’s mistake is a good one. In order to pass on a broad moral culture to our descendants, we must encourage technologists to take seriously the immensity of what they work with. When it comes to future suffering, it is better to err on the side of worry than on the side of indifference.
That doesn’t mean we should treat LaMDA as a person; we certainly shouldn’t. But it does mean that the mockery directed at Lemoine is misplaced. An ordained priest (in an esoteric sect), he claims to have discovered a soul in LaMDA’s utterances. As implausible as that may seem, at least it isn’t the usual hype of the tech industry. To me, this looks like a person making a mistake, but doing so on the basis of motives that deserve to be nurtured, not punished.
All this will happen again and again as the sophistication of artificial systems continues to grow. And again and again, people who think they have found the ghost in the machine will be wrong – until, one day, they aren’t. If we are too hard on those who err on the side of caution, we will simply drive them out of the public debate about AI, ceding the field to hype-mongers and to those whose intellectual descendants will one day profit from telling people to ignore real evidence of machine mentality.
I never expect to meet a sentient AI. But I think my students’ students might, and I want them to do so with an openness and a willingness to share this planet with whatever minds they discover. That will only happen if we make such a future believable.
Regina Rini teaches philosophy at York University, Toronto.
Further reading
The new breed: how to think about robots by Kate Darling (Allen Lane, £20)
You look like a thing and I love you: how artificial intelligence works and why it’s making the world a weirder place by Janelle Shane (Wildfire, £20)
AI: its nature and future by Margaret Boden (Oxford, £12.99)