After years of companies highlighting the potential of artificial intelligence, researchers say now is the time to adjust expectations.
With recent technological leaps, companies have developed more systems that can produce ostensibly human conversations, poetry and images. Still, AI ethicists and researchers warn that some companies are exaggerating its capabilities — hype they say is causing widespread misunderstanding and distorting policymakers’ perceptions of the power and fallibility of such technology.
“We’re out of balance,” said Oren Etzioni, chief executive of the Allen Institute for Artificial Intelligence, a nonprofit in Seattle.
He and other researchers say such imbalances help explain why many were misled earlier this month when an engineer at Alphabet Inc.'s Google argued, based on his religious beliefs, that one of the company's artificial intelligence systems should be considered sentient.
The engineer said the chatbot had in effect become a person with the right to be asked for consent to the experiments being performed on it. Google suspended him and dismissed his claim, saying its ethicists and technologists have explored and rejected the possibility.
The belief that AI is becoming — or ever might become — conscious remains on the fringe of the wider scientific community, researchers say.
In reality, artificial intelligence encompasses a range of techniques that remain largely useful for a range of non-cinematic back-office logistics, such as processing users’ data to better target them with ads, content, and product recommendations.
Over the past decade, companies such as Google, Facebook parent Meta Platforms Inc. and Amazon.com Inc. have invested heavily in such capabilities to power their engines for growth and profit.
For example, Google uses artificial intelligence to better parse complex searches, enabling it to deliver relevant ads and web results.
A few startups have also sprouted with more grandiose ambitions.
One, called OpenAI, has raised billions from donors and investors, including Tesla Inc. CEO Elon Musk and Microsoft Corp., in an effort to achieve so-called artificial general intelligence, a system capable of matching or surpassing every dimension of human intelligence.
Some researchers believe such a system is decades away, if not unattainable.
Competition among these companies to outdo one another has fueled rapid advances in AI and spawned a growing number of sizzling demos that have captured the public imagination and drawn attention to the technology.
OpenAI's DALL-E, a system that can generate artwork based on user prompts, such as "McDonald's in orbit around Saturn" or "bears in sports gear in a triathlon," has spawned many memes on social media in recent weeks.
Google has since followed suit with its own text-based art generation systems.
While these results can be spectacular, a growing number of experts are warning that companies aren’t dampening the hype enough.
Margaret Mitchell, who co-led Google's AI ethics team before the company fired her after she wrote a critical paper about its systems, says part of the search giant's pitch to shareholders is that it is the best in the world at AI.
Ms. Mitchell, now at an AI startup called Hugging Face, and Timnit Gebru, the other co-lead of Google's AI ethics team, who was also forced out, were some of the earliest to warn of the technology's dangers.
In their last paper written at the company, they argued that the technologies would at times cause harm, because their humanlike capabilities lead people to assume they have humanlike judgment, when in fact they can fail in ways humans would not.
Among the examples cited: a mistranslation by Facebook's AI system that rendered "good morning" in Arabic as "hurt them" in English and "attack them" in Hebrew, leading Israeli police to arrest a Palestinian man who had posted the greeting before they realized the error.
Internal documents reviewed by The Wall Street Journal as part of The Facebook Files series, published last year, also showed that Facebook's systems failed to consistently identify first-person shooting videos and racist rants, removing only a fraction of the content that violated the company's rules.
Facebook said improvements to its AI have been responsible for dramatically reducing the amount of hate speech and other content that violates its rules.
Google said it fired Ms. Mitchell for sharing internal documents with people outside the company. The company's head of AI told staffers that Ms. Gebru's work was not rigorous enough.
The dismissals reverberated through the tech industry, prompting thousands of people inside and outside Google to denounce what a petition called "unprecedented research censorship."
CEO Sundar Pichai said he would work to restore trust on these issues and pledged to double the number of people studying AI ethics.
The gap between perception and reality is not new.
Mr. Etzioni and others pointed to the marketing around Watson, the AI system from International Business Machines Corp. that became widely known after beating humans on the quiz show "Jeopardy!"
After a decade and billions of dollars of investment, the company said last year it was exploring the sale of Watson Health, a unit whose marquee product was supposed to help doctors diagnose and cure cancer.
The stakes have only grown as AI is now embedded everywhere, involving more companies whose software — email, search engines, news feeds, voice assistants — permeates our digital lives.
After its engineer's recent claims, Google pushed back on the idea that its chatbot is sentient.
The company's chatbots and other conversational tools "can riff on any fantastical topic," said Google spokesman Brian Gabriel. "If you ask what it's like to be an ice cream dinosaur, they can generate text about melting and roaring and so on."
That isn't the same as sentience, he added.
Blake Lemoine, the now-suspended engineer, said in an interview that he had compiled hundreds of pages of dialogue from controlled experiments with a chatbot called LaMDA to support his research, and that his work accurately portrays the inner workings of Google's programs.
"This is not an exaggeration of the nature of the system," Mr. Lemoine said. "I am trying to communicate as carefully and precisely as possible where there is uncertainty and where there is not."
Mr. Lemoine, who described himself as a mystic incorporating aspects of Christianity and other spiritual practices such as meditation, has said he is speaking in a religious capacity when he describes LaMDA as conscious.
Elizabeth Kumar, a computer science doctoral student at Brown University who studies AI policy, says the perception gap has crept into policy documents.
Recent local, federal, and international regulations and regulatory proposals have sought to address the potential of AI systems to discriminate, manipulate, or otherwise harm in ways that presume a system is highly competent.
They have largely overlooked the possibility of harm from such AI systems simply not working, which is more likely, Ms. Kumar says.
Mr. Etzioni, who is also a member of the Biden administration's National AI Research Resource Task Force, said policymakers often struggle to grasp the issues.
“I can tell you from my conversations with some of them that they are well-intentioned and ask good questions, but they are not super knowledgeable,” he said.