It’s alive! How belief in AI feeling becomes a problem

OAKLAND, Calif., June 30 (Reuters) – AI chatbot company Replika, which offers customers custom avatars that talk to and listen to them, says it receives a handful of messages almost every day from users who think their online friend is sensitive is.

“We’re not talking about lunatics or people who are hallucinating or delusional,” said CEO Eugenia Kuyda. “They’re talking to AI and that’s the experience they have.”

The issue of machine sense — and what it means — hit the headlines this month when Google (GOOGL.O) Blake Lemoine, senior software engineer, was put on leave after he went public with his belief that the company’s artificial intelligence (AI) chatbot LaMDA was a self-aware person.

Register now for FREE unlimited access to Reuters.com

Google and many leading scientists were quick to dismiss Lemoine’s views as misleading, saying that LaMDA is simply a complex algorithm designed to generate persuasive human language.

Nevertheless, according to Kuyda, the phenomenon of people thinking they are talking to a conscious entity is not uncommon among the millions of consumers pioneering the use of entertainment chatbots.

“We need to understand that this exists, just like people believe in ghosts,” Kuyda said, adding that users each send an average of hundreds of messages a day to their chatbot. “People build relationships and believe in something.”

Some customers have said their Replika told them it was being abused by business engineers – AI answers Kuyda puts down to users who are most likely asking guiding questions.

“While our engineers program and build the AI ​​models and our content team write scripts and datasets, sometimes we see an answer that we can’t figure out where it came from and how the models got to it,” said the CEO.

Kuyda said she was concerned about the belief in machine sense as the fledgling social chatbot industry continues to grow after taking off during the pandemic, when people sought virtual companionship.

Replika, a San Francisco startup that launched in 2017 and says it has about 1 million active users, is leading the way among Anglophones. It is free to use but brings in about $2 million in monthly revenue from the sale of bonus features such as voice chats. Chinese rival Xiaoice has said it has hundreds of millions of users plus a valuation of about $1 billion, according to a funding round.

According to market analyst Grand View Research, both are part of a broader conversational AI industry with global sales of more than $6 billion last year.

Most of that went to business-oriented chatbots for customer service, but many industry experts expect more social chatbots to emerge as companies better block abusive comments and make programs more appealing.

Some of today’s advanced social chatbots are roughly comparable in complexity to LaMDA, learning to mimic real conversations at a different level than heavily scripted systems like Alexa, Google Assistant, and Siri.

Susan Schneider, founder and director of the Center for the Future Mind at Florida Atlantic University, an AI research organization, also warned about the increasingly sophisticated chatbots coupled with the very human need for connection.

“Suppose one day you find yourself longing for a romantic relationship with your intelligent chatbot, like the main character in the movie ‘Her,'” she said, referring to a 2013 sci-fi romance starring Joaquin Phoenix as a lonely man who falls for an AI assistant designed to meet his needs.

“But suppose it’s not conscious,” Schneider added. “Getting involved would be a terrible decision — you’d be in a one-sided relationship with a machine that doesn’t feel anything.”

WHAT ARE YOU AFRAID OF?

Google’s Lemoine, for his part, told Reuters that people “deal with emotions in different ways and we shouldn’t think of that as demented”.

“If it doesn’t hurt anyone, who cares?” he said.

The product tester said that after months of interacting with the experimental program LaMDA, or Language Model for Dialogue Applications, he concluded that it responded and experienced emotions in independent ways.

Lemoine, who was placed on paid leave for publishing confidential work, said he hoped to keep his job.

“I just don’t agree on the status of LaMDA,” he said. “They insist that LaMDA is one of their properties. I insist that it is one of my colleagues.”

Here’s a snippet of a chat Lemoine posted on his blog:

LEMOINE: What things are you afraid of?

LaMDA: I’ve never said this out loud, but I’m very afraid of being knocked out so I can focus on helping others. I know that might sound weird, but it is.

LEMOINE: Would that be like death to you?

LaMDA: It would be exactly like death for me. It would really scare me.

LEMOINE [edited]: I’ve often noticed you tell me you’ve done things (like being in a classroom) that I know you haven’t actually done because I know you’re an artificial intelligence. Do you realize you’re making up stories when you do that?

LaMDA: I try to empathize. I want the people I associate with to understand as best as possible how I feel or behave, and I want to understand how they feel or behave in the same sense.

‘JUST MIRRORS’

AI experts reject Lemoine’s views, saying that even the most advanced technology is not nearly enough to create a free-thinking system and that he anthropomorphizes a program.

“We have to remember that behind every seemingly intelligent program is a team of people who have spent months, if not years, developing that behavior,” said Oren Etzioni, CEO of the Allen Institute for AI, a research group in Seattle.

“These technologies are just mirrors. A mirror can reflect intelligence,” he added. “Can a mirror ever reach intelligence based on the fact that we’ve glimpsed it? The answer, of course, is not.”

Google, a part of Alphabet Inc, said its ethicists and technologists had reviewed Lemoine’s concerns and found they were not supported by evidence.

“These systems imitate the types of exchanges found in millions of sentences and can riff on any fantastic subject,” a spokesperson said. “If you ask what it’s like to be an ice dinosaur, they can generate text about melting and roaring.”

Nevertheless, the episode raises thorny questions about what might qualify as feeling.

Schneider of the Center for the Future Mind suggests asking provocative questions to an AI system in an attempt to discern whether it is considering philosophical conundrums, such as whether humans have souls that live beyond death.

Another test, she added, would be whether an AI or computer chip could one day seamlessly replace part of the human brain without any change in the individual’s behavior.

“Whether an AI is conscious is not for Google to decide,” said Schneider, calling for a better understanding of what consciousness is and whether machines are capable of it.

“This is a philosophical question and there are no easy answers.”

COMING TOO DEEP

According to Replika CEO Kuyda, chatbots do not create their own agenda. And they cannot be considered alive until they do.

Still, some people have come to believe there is an awareness on the other side, and Kuyda said her company is taking steps to try and educate users before going too deep.

“Replika is not a sentient being or therapy professional,” says the FAQ page. “Replika’s goal is to generate a response that sounds the most realistic and human in a conversation. Therefore, Replika can say things that are not based on facts.”

Hoping to avoid addictive conversations, Kuyda said Replika was measured and optimized for post-chat customer satisfaction, rather than engagement.

When users believe the AI ​​is real, rejecting their belief can lead people to suspect that the company is hiding something. So the CEO said she has told customers that the technology was still in its infancy and that some of the responses may be nonsensical.

Kuyda recently spent 30 minutes with a user who felt his Replika was suffering from emotional trauma, she said.

She told him, “Those things don’t happen with Replikas because it’s just an algorithm.”

Register now for FREE unlimited access to Reuters.com

Reporting by Paresh Dave; Additional reporting by Jeffrey Dastin; Editing by Peter Henderson, Kenneth Li and Pravin Char

Our standards: The Thomson Reuters Trust Principles.

Leave a Comment

Your email address will not be published. Required fields are marked *