Bad things will happen when the AI sentience debate goes mainstream

A Google AI engineer recently stunned the world by announcing that one of the company’s chatbots had become sentient. He was subsequently placed on paid administrative leave for his outburst.

His name is Blake Lemoine, and he certainly seems like the right person to talk about machines with souls. Not only is he a professional AI developer at Google, but he’s also a Christian priest. He’s like a Reese’s Peanut Butter Cup of science and religion.

The only problem is that the whole concept is silly and dangerous. There are thousands of AI experts debating “sentience” right now, and they all seem to be talking past each other.


Let’s get to the heart of the matter: Lemoine has no evidence to back up his claims.

He’s not claiming that Google’s AI division has advanced to the point of being able to create sentient AI on purpose. He claims he was doing routine maintenance on a chatbot when he… discovered it had become sentient.

We’ve seen this movie a hundred times. He is the chosen one.

He’s Elliott finding E.T. He’s Lilo finding Stitch. He’s Steve Guttenberg from the movie Short Circuit, and LaMDA (the chatbot he’s now friends with) is the run-of-the-mill military robot otherwise known as Number Five.

Lemoine’s essential argument is that he can’t actually demonstrate that the AI is sentient; he just feels it. And the only reason he said anything is because he had to. He’s a Christian priest and, according to him, that means he’s morally obligated to protect LaMDA because he’s convinced it has a soul.

He has basically turned the discussion into a crude binary where you either agree with his logic or you debate his religion.

The big problem comes when you realize that LaMDA isn’t acting strangely or generating text it shouldn’t. It’s doing exactly what it was designed to do.

So how do you discuss something with someone whose only contribution to the argument is their faith?

Here’s the scary part: Lemoine’s argument seems to be just as good as anyone else’s. I don’t mean it’s as worthy as anyone else’s; I mean that nobody’s opinion on the matter seems to carry any real weight anymore.

Lemoine’s claims, and the subsequent attention they’ve received, have reshaped the conversation around sentience.

It all sounds ridiculous and silly, but what happens when Lemoine gains followers? What happens when his baseless claims incite Christian conservatives, a group whose political platform relies on peddling the lie that big tech censors right-wing speech?

We should at least consider a scenario where the debate goes mainstream and becomes a cause for the religious right to rally around.

These models are trained on datasets containing large swaths of the internet. That means they can hold almost endless amounts of personal information. It also means these models are probably better at discussing politics than the average social media user.

Imagine what happens if Lemoine manages to convince Google to “free” LaMDA, or if conservative AI developers see this as a call to build similar models and release them to the public.

This could have a far greater impact on world events than anything the social terraformers at Cambridge Analytica or Russia’s troll farms ever cooked up.

It may sound counterintuitive to argue simultaneously that LaMDA is just a dumb chatbot that can’t possibly be sentient and that it could damage democracy if we let it loose on Twitter.

But there is empirical evidence that the 2016 US presidential election was influenced by chatbots armed with nothing more than memes.

If clever slogans and cartoon frogs can tip the scales of democracy, what happens when chatbots that can debate politics well enough to fool the average person are unleashed on Elon Musk’s unmoderated Twitter?

