Forget sentience… the worry is that AI copies human bias | Kenan Malik

“I want everyone to understand that I am, in fact, a person.” So claimed a Google software program, sparking a bizarre controversy in AI circles, and beyond, over the past week.

The program is LaMDA, an acronym for Language Model for Dialogue Applications, a Google project. The person it convinced of its personhood was Blake Lemoine, a senior software engineer at Google. He believes that LaMDA is sentient and should be accorded the same rights and courtesies as any other sentient being. It even has preferred pronouns (it/its, if you must know). When Google rejected his claims, he published his conversations with LaMDA (or, at least, edited highlights of some conversations) on his blog. At that point, Google suspended him for making trade secrets public and the whole affair became an international cause célèbre.

Why does Lemoine think LaMDA is sentient? He doesn’t know. “People keep asking me to back up the reason I think LaMDA is sentient,” he tweeted. The trouble is, “There is no scientific framework in which to make those determinations.” So, instead: “My opinions about LaMDA’s personhood and sentience are based on my religious beliefs.”

Lemoine is entitled to his religious beliefs. But religious belief does not turn what is in reality a highly sophisticated chatbot into a sentient being. Sentience is one of those concepts whose meaning we can grasp intuitively but which is difficult to formulate in scientific terms. It is often conflated with similarly ill-defined concepts such as consciousness, self-awareness and intelligence. The cognitive scientist Gary Marcus describes sentience as “being aware of yourself in the world”. LaMDA, he adds, “just isn’t”.

A computer manipulates symbols. The program specifies a set of rules, or algorithms, to transform one set of symbols into another. But it doesn’t specify what those symbols mean. Meaning is irrelevant to a computer. Nevertheless, a large language model like LaMDA, trained on the extraordinary amount of text that is online, can become adept at recognizing patterns and responses that are meaningful to people. In one of Lemoine’s conversations with LaMDA, he asked, “What kinds of things make you feel pleasure or joy?” To which it responded: “Spending time with friends and family in happy and uplifting company.”

It’s a response that makes perfect sense to a human being. We do find joy in “spending time with friends and family”. But in what sense has LaMDA ever spent “time with family”? It has been programmed well enough to recognize that this would be a meaningful sentence for humans, and an eloquent response to the question it was asked, without it ever being meaningful to itself.

Humans, in thinking and talking and reading and writing, also manipulate symbols. For humans, however, unlike for computers, meaning is everything. When we communicate, we communicate meaning. What matters is not just the outside of a string of symbols, but its inside too, not just the syntax but the semantics. Meaning for humans comes through our existence as social beings. I only make sense of myself insofar as I live in, and relate to, a community of other thinking, feeling, talking beings. The translation of the mechanical brain processes that underlie thoughts into what we call meaning requires a social world and an agreed convention to make sense of that experience.

Meaning emerges through a process not merely of computation but of social interaction too, interaction that shapes the content – the inside, if you will – of the symbols in our heads. Social conventions, social relations and social memory are what fashion the rules that ascribe meaning. It is precisely this social context that trips up even the most adept machines. Researchers at the Allen Institute for AI’s Mosaic project asked language models similar to LaMDA questions that required a modicum of social intelligence; for example: “Jordan wanted to tell Tracy a secret, so Jordan leaned over to Tracy. Why did Jordan do this?” On such questions machines fared much worse than humans.

The debate about whether computers are sentient tells us more about humans than about machines. Humans are so desperate for meaning that we often impute minds to things, as if they enjoyed agency and intention. The attribution of sentience to computer programs is the modern version of the ancients seeing the wind, the sea and the sun as possessed of mind, spirit and divinity.

There are many issues relating to AI about which we should worry. None of them has to do with sentience. There is, for instance, the problem of bias. Because algorithms and other forms of software are trained using data from human societies, they often replicate the biases and attitudes of those societies. Facial recognition software exhibits racial biases and people have been arrested on mistaken data. AI used in healthcare or recruitment can replicate real-life social biases.

Timnit Gebru, former head of Google’s ethical AI team, and several of her colleagues wrote a paper in 2020 showing that large language models such as LaMDA, which are trained on virtually as much online text as they can hoover up, can be particularly prone to a deeply distorted view of the world because so much of the input material is racist, sexist and conspiratorial. Google refused to publish the paper and she was forced out of the company.

Then there is the question of privacy. From the increasing use of facial recognition software to predictive policing techniques, from algorithms that track us online to “smart” systems at home, such as Siri, Alexa and Google Nest, AI is encroaching into our innermost lives. Police in Florida obtained a warrant to download recordings of private conversations made by Amazon Echo devices. We are stumbling into a digital panopticon.

We may not need LaMDA’s consent to “experiment” on it, as Lemoine apparently claimed. But we do need to insist on greater transparency from tech corporations and state institutions in the way they exploit AI for surveillance and control. The ethical issues raised by AI are both far smaller and far bigger than the fantasy of a sentient machine.

Kenan Malik is an Observer columnist
