Google’s PaLM AI Is Much Stranger Than Conscious

Last week, Google placed one of its engineers on administrative leave after he claimed to have encountered machine sentience in a dialogue agent named LaMDA. Because machine sentience is a staple of the movies, and because the dream of artificial personhood is as old as science itself, the story went viral and received far more attention than virtually any natural-language-processing (NLP) story ever has. That’s too bad. The idea that LaMDA is conscious is nonsense: LaMDA is no more conscious than a pocket calculator. More important, the madcap fantasy of machine sentience has once again been allowed to dominate the conversation about artificial intelligence, while much stranger and richer developments, potentially more dangerous and more beautiful, are under way.

The fact that LaMDA, in particular, is at the center of attention is frankly a little odd. LaMDA is a dialogue agent. The purpose of dialogue agents is to convince you that you are talking to a person. Convincing chatbots are far from cutting-edge technology at this point. Programs like Project December are already capable of re-creating dead loved ones using NLP. But those simulations are no more alive than a photograph of your late great-grandfather is.

Models more powerful and more puzzling than LaMDA already exist. LaMDA operates on up to 137 billion parameters, which are, broadly speaking, the patterns in language that a transformer-based NLP model uses to create meaningful text predictions. I recently spoke with the engineers who worked on Google’s latest large language model, PaLM, which has 540 billion parameters and is capable of performing hundreds of distinct tasks without being specifically trained to perform them. It is a true artificial general intelligence, insofar as it can apply itself to various intellectual tasks without specific training, “out of the box,” as it were.
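For a sense of what a “parameter” is: it is simply one of the learned numbers inside the model. The following is a minimal sketch in Python; the layer shapes and sizes are invented for illustration and are not LaMDA’s or PaLM’s actual architecture.

```python
# A toy illustration of what "parameters" means: the learned numbers
# inside a model. The sizes below are invented and tiny; PaLM's
# equivalent weight matrices add up to roughly 540 billion such numbers.

d_model = 8          # width of the toy model's hidden representation
vocab_size = 100     # number of tokens the toy model knows

embedding = vocab_size * d_model            # token-embedding weights
attention = 4 * (d_model * d_model)         # query, key, value, and output projections
feed_forward = 2 * (d_model * 4 * d_model)  # the two dense layers of the MLP block

total = embedding + attention + feed_forward
print(total)  # 1568 parameters in this one toy layer
```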

Some of these tasks are clearly useful and potentially transformative. According to the engineers (and, just to be clear, I haven’t seen PaLM in action myself, as it’s not a product), if you ask it a question in Bengali, it can answer in both Bengali and English. If you ask it to translate a piece of code from C to Python, it can do so. It can summarize text. It can explain jokes. Then there’s the function that scared its own developers, one that requires a certain distance and intellectual coolness not to panic over: PaLM can reason. Or, to be more precise (and precision very much matters here), PaLM can perform reason.

The method by which PaLM reasons is called “chain-of-thought prompting.” Sharan Narang, one of the engineers who led the development of PaLM, told me that large language models have never been very good at making logical leaps unless explicitly trained to make them. Giving a large language model the answer to a math problem and then asking it to replicate the means of solving that math problem tends not to work. But with chain-of-thought prompting, you explain the method of getting the answer instead of giving the answer itself. The approach is closer to teaching children than to programming machines. “If you just told them the answer is 11, they would be confused. But if you break it down, they do better,” Narang said.
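As a concrete, purely hypothetical sketch of the difference (the worked example, the prompt wording, and the generate() stub below are my own illustrative assumptions, not PaLM’s actual interface or training data):

```python
# Chain-of-thought prompting, sketched: the only difference between the
# two prompts is that the second spells out the intermediate steps.

STANDARD_PROMPT = """\
Q: Roger has 5 tennis balls. He buys 2 more cans of 3 tennis balls each.
How many tennis balls does he have now?
A: The answer is 11.

Q: The cafeteria had 23 apples. They used 20 to make lunch and bought
6 more. How many apples do they have?
A:"""

CHAIN_OF_THOUGHT_PROMPT = """\
Q: Roger has 5 tennis balls. He buys 2 more cans of 3 tennis balls each.
How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 balls each is 6 balls.
5 + 6 = 11. The answer is 11.

Q: The cafeteria had 23 apples. They used 20 to make lunch and bought
6 more. How many apples do they have?
A:"""

def generate(prompt: str) -> str:
    """Stand-in for a call to a large language model; not a real API."""
    raise NotImplementedError("Replace with an actual model call.")

# Given the first prompt, a model tends to guess a bare number and often
# gets it wrong. Given the second, it tends to write out the steps
# ("23 - 20 = 3, 3 + 6 = 9") and arrive at the correct answer, 9.
```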

[Image: Google’s illustration of the chain-of-thought prompting process.]

Adding to the general strangeness of this function is the fact that Google’s own engineers don’t understand how or why PaLM is capable of it. The difference between PaLM and other models may be the raw computing power at play. It may be that only 78 percent of the language PaLM was trained on is English, broadening the meanings available to PaLM relative to other large language models, such as GPT-3. Or it may be the fact that the engineers changed the way they tokenize mathematical data in the inputs. The engineers have their guesses, but they don’t feel that their guesses are any better than anybody else’s. Simply put, PaLM “has shown capabilities that we haven’t seen before,” said Aakanksha Chowdhery, a co-lead of the PaLM team, who understands PaLM as well as any engineer does.

This, of course, has nothing to do with artificial consciousness. “I don’t anthropomorphize,” Chowdhery said bluntly. “We’re just predicting language.” Artificial consciousness is a distant dream that remains firmly entrenched in science fiction, because we have no idea what human consciousness is; there is no functioning falsifiable thesis of consciousness, just a bunch of vague notions. And if there’s no way to test for consciousness, there’s no way to program it. You can ask an algorithm to do only what you tell it to do. All we can come up with to compare machines with humans are little games, such as Turing’s imitation game, which ultimately prove nothing.

Where we have arrived instead is somewhere more alien than artificial consciousness. Strangely enough, a program like PaLM would be easier to understand if it were simply conscious. We at least know what the experience of consciousness entails. All of PaLM’s functions that I’ve described so far come from nothing more than predictive text. Which word makes sense next? That’s it. That’s all. Why would that function result in such enormous leaps in the capacity to make meaning? This technology works on substrates that underlie not only all language but all meaning (or is there a difference?), and these substrates are fundamentally mysterious. PaLM may possess modalities beyond our comprehension. What does PaLM understand that we don’t know how to ask about?
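To make “predictive text” concrete: at bottom, the model assigns a probability to every candidate next word, and the text is built by choosing from that distribution one word at a time. The tiny vocabulary and the probabilities in the sketch below are invented for illustration; a real model like PaLM computes them from its billions of parameters.

```python
import random

def next_word_distribution(context: str) -> dict[str, float]:
    """Stand-in for the model's forward pass: context in, probabilities out.
    These numbers are made up; a real model derives them from its weights."""
    return {"sense": 0.55, "meaning": 0.30, "nothing": 0.10, "banana": 0.05}

def predict_next_word(context: str) -> str:
    dist = next_word_distribution(context)
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights, k=1)[0]

print(predict_next_word("Which word makes"))  # most often: "sense"
```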

Using a word like understand is fraught at this moment. One problem in grappling with the reality of NLP is the AI hype machine, which, like everything else in Silicon Valley, oversells itself. Google claims in its promotional materials that PaLM demonstrates “impressive understanding of natural language.” But what does the word understanding mean in this context? I am of two minds myself: on the one hand, PaLM and other large language models are capable of understanding in the sense that if you tell them something, its meaning registers. On the other hand, this is nothing like human understanding. “I don’t think our language is good at expressing these things,” Zoubin Ghahramani, the vice president of research at Google, told me. “We have words for mapping meaning between sentences and objects, and the words we use are words like understanding. The problem is that, in a narrow sense, you could say these systems understand just like a calculator understands addition, and in a deeper sense they don’t understand. We have to take these words with a grain of salt.” Needless to say, Twitter conversations and the viral information network in general are not particularly good at taking things with a grain of salt.

Ghahramani is excited about the troubling unknown of it all. He has been working in artificial intelligence for 30 years, but told me that right now is “the most exciting time to be in the field,” precisely because of “the speed at which we’re being surprised by the technology.” He sees huge potential for AI as a tool in use cases where humans are, frankly, very bad at things but computers and AI systems are very good at them. “We tend to think about intelligence in a very human-centered way, and that leads us to all sorts of problems,” Ghahramani said. “One is that we anthropomorphize technologies that are dumb statistical-pattern matchers. Another problem is that we tend to mimic human capabilities rather than complement human capabilities.” Humans aren’t built to find meaning in, say, genomic sequences, but large language models can be. Large language models can find meaning in places where we can find only chaos.

Yet there are enormous social and political dangers at play here, as well as possibilities for beauty that are still hard to grasp. Large language models don’t produce consciousness, but they do produce convincing imitations of consciousness, which will only improve dramatically and will continue to confuse people. If even a Google engineer can’t tell the difference between a dialogue agent and a real person, what hope will there be when this stuff reaches the general public? Unlike machine sentience, these questions are real. Answering them will require unprecedented collaboration between humanists and technologists. The very nature of meaning is at stake.

So no, Google does not have an artificial consciousness. Instead, it is building enormously powerful large language systems with the ultimate goal, as Narang said, “to enable a single model that can generalize across millions of tasks and absorb data across multiple modalities.” Frankly, that’s enough to worry about without the sci-fi robots playing on the screens in our heads. Google has no plans to turn PaLM into a product. “We should not get ahead of ourselves in terms of capabilities,” Ghahramani said. “We have to approach all of this technology in a cautious and skeptical way.” Artificial intelligence, particularly the AI derived from deep learning, tends to rise rapidly through periods of shocking development and then grind to a halt. (See self-driving cars, medical imaging, and so on.) But when the leaps come, they come hard and fast and in unexpected ways. Ghahramani told me that we need to make these leaps safely. He is right. We are talking about machines of general meaning: it would be good to be careful.

The fantasy of sentience through artificial intelligence is not just wrong; it’s boring. It is the dream of innovation by way of received ideas, the future for people whose minds never escaped the enchantment of 1930s science-fiction serials. The questions forced on us by the latest AI technology are the most profound and the most simple; they are questions that, as always, we are completely unprepared to face. I worry that people may simply not have the intelligence to deal with the consequences of artificial intelligence. The line between our language and the language of the machines is blurring, and our capacity to understand the distinction is dissolving inside the blur.
