Sentient AI? Convincing you it’s human is only part of LaMDA’s job

As any great illusionist will tell you, the whole point of a staged illusion is to look utterly convincing, to make everything that happens on stage seem so real that the average audience member has no way of figuring out how the illusion works.

If this were not the case, it would not be an illusion and the illusionist would essentially be out of a job. In this analogy, Google is the illusionist and its LaMDA chatbot – which made headlines a few weeks ago after a top engineer claimed the conversational AI had reached consciousness – is the illusion. That is, despite the wave of excitement and speculation on social media and in the media in general, and despite the engineer’s claims, LaMDA is not sentient.

How can AI sentience be proven?

This is, of course, the million dollar question – to which there is currently no answer.

LaMDA is a language model-based chat agent designed to generate fluent sentences and conversations that look and sound completely natural. Its fluency stands in stark contrast to the clunky, awkward AI chatbots of the past, which often produced frustrating or unintentionally funny ‘conversations’, and perhaps it was this contrast that, understandably, impressed people so much.

Our normalcy bias tells us that only other sentient humans are capable of being this ‘articulate’. So when you witness this level of articulateness from an AI, it’s natural to feel that it must surely be sentient.

For an AI to be truly sentient, it would need to be able to think, perceive and feel rather than just use language in a very natural way. However, scientists are divided on whether it is even feasible for an AI system to achieve these qualities.

There are scientists, like Ray Kurzweil, who believe that a human body is made up of several thousand programs, and that if we can just figure out all of those programs, we can build a sentient AI system.

But others disagree, on the grounds that 1) human intelligence and functionality cannot be reduced to a finite number of algorithms, and 2) even if a system replicates all of that functionality in some form, it cannot really be considered conscious, because consciousness is not something that can be created artificially.

Aside from these divisions among scientists, there are as yet no accepted standards for proving the alleged sentience of an AI system. The famous Turing test, currently getting a lot of mentions on social media, is only meant to measure a machine’s ability to exhibit apparently intelligent behavior similar to, or indistinguishable from, that of a human.

It is incapable of telling us anything about a machine’s level of consciousness (or lack thereof). Therefore, while LaMDA has clearly passed the Turing test with flying colors, this in itself does not prove the presence of a self-aware consciousness. It only proves that it can create the illusion of possessing a self-aware consciousness, which is exactly what it was designed to do.

When, if ever, will AI become sentient?

At the moment we have several applications that demonstrate Artificial Narrow Intelligence. ANI is a type of AI designed to do a single task very well. Examples include facial recognition software, disease mapping tools, content recommendation filters, and chess software.

LaMDA falls under the category of Artificial General Intelligence, or AGI – also known as ‘deep AI’. That is, AI designed to mimic human intelligence, which it can apply to a variety of different tasks.

For an AI to be sentient, it would have to go beyond this kind of task intelligence and demonstrate perception, feelings and even free will. However, depending on how we define these concepts, we may never have sentient AI.

Even in the best-case scenario, that is at least another five to ten years away, and that is assuming we could define the aforementioned concepts of consciousness and free will in a universally standardized, objectively characterized way.

One AI to rule them all…or not

The LaMDA story reminds me of the time when filmmaker Peter Jackson’s production team created an AI, aptly named Massive, to compose the epic battle scenes in the Lord of the Rings trilogy.

Massive’s job was to vividly simulate thousands of individual CGI soldiers on the battlefield, each acting as an independent unit rather than simply mimicking the same moves. In the second movie, The Two Towers, there is a battle scene in which the film’s bad guys bring out a unit of giant mammoths to attack the good guys.

As the story goes, when the team first tried out this sequence, the CGI soldiers playing the good guys ran away in the other direction at the sight of the mammoths instead of fighting the enemy. Rumors quickly spread that this was an intelligent response, with the CGI soldiers “deciding” they couldn’t win this fight and opting to run for their lives instead.

In reality, the soldiers ran the other way because of a lack of data, not because of some sentience they had suddenly acquired. The team made some adjustments and the issue was resolved. The apparent demonstration of “intelligence” was a bug, not a feature. But in situations like this, it’s tempting and exciting to assume sentience. After all, we all love a good magic show.

Be careful what we wish for

Finally, I think we really need to ask ourselves whether we want AI systems to be sentient. We’ve been so absorbed in the hype around AI sentience that we haven’t sufficiently questioned whether this is a goal we should be striving for.

I’m not talking about the danger of a sentient AI turning against us, as so many dystopian science fiction movies like to imagine. It’s simply that we need to have a clear idea of why we want to achieve something in order to align technological progress with societal needs.

What good is AI sentience, other than being “cool” or “exciting”? Why should we pursue it? Who would it help? Even some of our best intentions with this technology have shown dangerous side effects, such as language model-based AI systems for medical Q&A advising someone to commit suicide, because proper guardrails weren’t put around them.

Whether in healthcare or self-driving cars, we are way behind the technology when it comes to understanding, implementing and using AI with accountability and with societal, legal and ethical considerations in mind.

Until we have enough discussions and resolutions along these lines, I fear that hype and misconceptions about AI will continue to dominate the popular imagination. We may be entertained by the Wizard of Oz’s theatrics, but given the potential problems that can arise from these misconceptions, it’s time to lift the curtain and reveal the less fantastic truth behind it.

Dr. Chirag Shah is an associate professor at the Information School at the University of Washington.
