
Google’s powerful artificial intelligence spotlights a human cognitive glitch


Words can have a powerful effect on people, even when they are generated by an unthinking machine.

It is easy for people to confuse fluent speech with fluent thinking.

When you read a sentence like this, your past experience leads you to believe that it was written by a thinking, feeling human being. And in this case there is indeed a human who types these words: [Hi, there!] But today, some sentences that seem remarkably human are actually generated by AI systems trained on massive amounts of human text.

People are so used to assuming that fluent language comes from a thinking, feeling human that evidence to the contrary can be difficult to wrap your head around. How are people likely to navigate this relatively uncharted territory? Because of a persistent tendency to associate fluent expression with fluent thinking, it is natural, but potentially misleading, to assume that if an AI model can express itself fluently, it must also think and feel just as humans do.

As a result, it’s perhaps not surprising that a former Google engineer recently claimed that Google’s AI system LaMDA has a sense of self because it can eloquently generate text about its purported feelings. This event and the subsequent media attention led to a number of rightly skeptical articles and posts about the claim that computational models of human language are sentient, meaning they can think, feel, and experience.

The question of what it would mean for an AI model to be sentient is actually quite complicated (see, for instance, our colleague’s take), and our aim in this article is not to settle it. But as language researchers, we can use our work in cognitive science and linguistics to explain why it is all too easy for people to fall into the cognitive trap of assuming that an entity that can use language fluently is sentient, conscious, or intelligent.

Using AI to generate human-like language

Text generated by models like Google’s LaMDA can be difficult to distinguish from text written by humans. This impressive achievement is the result of a decades-long program of building models that generate grammatical, meaningful language.

The first computer program to engage people in dialogue was the ELIZA psychotherapy software, built more than half a century ago. Credit: Rosenfeld Media/Flickr, CC BY

Early versions of language models, dating back to at least the 1950s and known as n-gram models, simply counted occurrences of specific phrases and used them to guess which words were likely to occur in particular contexts. For example, it’s easy to know that “peanut butter and jelly” is a more likely phrase than “peanut butter and pineapple.” If you have enough English text, you’ll see the phrase “peanut butter and jelly” over and over, but maybe never the phrase “peanut butter and pineapple.”
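To make the counting idea concrete, here is a minimal Python sketch in the spirit of such an n-gram model. The tiny corpus, the variable names, and the three-word context are illustrative assumptions, not part of any historical system.

```python
from collections import Counter

# Toy stand-in for "enough English text" (purely illustrative).
corpus = (
    "peanut butter and jelly sandwich . "
    "she ate peanut butter and jelly . "
    "he likes peanut butter and jelly . "
    "they served pineapple and ham ."
).split()

# Count how often each word follows the context "peanut butter and".
context = ("peanut", "butter", "and")
next_word_counts = Counter(
    corpus[i + 3]
    for i in range(len(corpus) - 3)
    if tuple(corpus[i:i + 3]) == context
)

# The most frequent continuation is the model's guess:
# "jelly" appears three times, "pineapple" never does.
print(next_word_counts.most_common())  # [('jelly', 3)]
```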

Today’s models, which are sets of data and rules that approximate human language, differ from these early efforts in several important ways. First, they are trained on pretty much the entire internet. Second, they can learn relationships between words that are far apart, not just words that are neighbors. Third, they’re tuned by a huge number of internal “knobs”, so many that it’s hard for even the engineers who design them to understand why they generate one string of words rather than another.

However, the models’ task remains the same as it was in the 1950s: to determine which word is likely to come next. Today, they are so good at this task that almost all the sentences they generate seem fluent and grammatical.

Peanut butter and pineapple?

We asked a large language model, GPT-3, to complete the sentence “Peanut butter and pineapple___”. It said: “Peanut butter and pineapple are a great combination. The sweet and savory flavors of peanut butter and pineapple complement each other perfectly.” If a person said this, you might conclude that they had tried peanut butter and pineapple together, formed an opinion, and shared it with the reader.

But how did GPT-3 come up with this paragraph? By generating a word that fits the context we provided. And then another. And then another. The model has never seen, touched, or tasted pineapple; it just processed all the texts on the internet that mention them. And yet reading this paragraph can lead the human mind, even that of a Google engineer, to imagine GPT-3 as an intelligent being that can reason about peanut butter and pineapple dishes.
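As a rough sketch of that word-by-word loop, the Python example below uses GPT-2, an openly available predecessor of GPT-3 from the Hugging Face transformers library, since GPT-3 itself is only reachable through a hosted API. The greedy decoding and the 20-token limit are assumptions chosen for illustration; this is not the authors’ actual setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# GPT-2 stands in for GPT-3 here: same next-word-prediction mechanism,
# just a smaller, openly downloadable model.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Peanut butter and pineapple"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Generate one token at a time: score every possible next token,
# keep the most likely one, append it, and repeat.
for _ in range(20):
    with torch.no_grad():
        logits = model(input_ids).logits
    next_id = logits[0, -1].argmax()          # most likely next token
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```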

Large AI language models can hold a fluent conversation. However, they have no overall message to communicate, so their sentences often follow common literary tropes extracted from the texts they were trained on. For example, if prompted on the topic “the nature of love”, the model might generate sentences about believing that love conquers all. The human brain primes the reader to interpret these words as the model’s opinion on the subject, but they are simply a plausible sequence of words.

The human brain is programmed to infer intentions behind words. Every time you start a conversation, your mind automatically constructs a mental model of your conversation partner. You then use the words they say to fill in the model with that person’s goals, feelings, and beliefs.

The process of jumping from words to the mental model is seamless and is activated every time you receive a full sentence. This cognitive process saves you a lot of time and effort in everyday life, greatly facilitating your social interactions.

In the case of AI systems, however, this process misfires, building a mental model out of thin air.

A little more probing can reveal the severity of this misfire. Consider the following prompt: “Peanut butter and feathers taste great together because___”. GPT-3 continued: “Peanut butter and feathers taste great together because they both have a nutty flavor. Peanut butter is also soft and creamy, which helps offset the texture of the feather.”

The text in this case is just as fluent as in our pineapple example, but this time the model is saying something decidedly less sensible. You might begin to suspect that GPT-3 has never actually tried peanut butter and feathers.

Attributing intelligence to machines, denying it to humans

A sad irony is that the same cognitive bias that leads people to attribute humanity to GPT-3 can cause them to treat actual humans in inhumane ways. Sociocultural linguistics, the study of language in its social and cultural context, shows that assuming an overly tight link between fluent expression and fluent thinking can lead to bias against people who speak differently.

For instance, people with a foreign accent are often perceived as less intelligent and are less likely to get the jobs they are qualified for. Similar biases exist against speakers of dialects that are not considered prestigious, such as Southern English in the US, against deaf people using sign languages, and against people with speech impediments such as stuttering.

These prejudices are very damaging, often lead to racist and sexist assumptions, and time and again prove to be unfounded.

Fluent language alone does not imply humanity

Will AI ever become sentient? This question requires deep consideration, and indeed philosophers have pondered it for decades. What researchers have determined, however, is that you cannot simply trust a language model when it tells you how it feels. Words can be misleading, and it is all too easy to mistake fluent speech for fluent thinking.

Authors:

  • Kyle Mahowald, assistant professor of linguistics, University of Texas at Austin College of Liberal Arts
  • Anna A. Ivanova, PhD candidate in brain and cognitive sciences, Massachusetts Institute of Technology (MIT)

Contributors:

  • Evelina Fedorenko, Associate Professor of Neuroscience, Massachusetts Institute of Technology (MIT)
  • Idan Asher Blank, Assistant Professor of Psychology and Linguistics, UCLA Luskin School of Public Affairs
  • Joshua B. Tenenbaum, Professor of Computational Cognitive Science, Massachusetts Institute of Technology (MIT)
  • Nancy Kanwisher, Professor of Cognitive Neuroscience, Massachusetts Institute of Technology (MIT)

This article was first published in The Conversation.
