One day AI will seem as human as anyone. What then?

When I first heard about Eliza, the program that asks people questions like a Rogerian psychoanalyst, I learned that I could run it in my favorite text editor, Emacs. Eliza is a really simple program, with hardcoded text and flow control, pattern matching, and simple templated learning for psychoanalytic triggers – like how recently you mentioned your mother. Yet even though I knew how it worked, I felt a presence. I broke that eerie feeling forever, though, when it occurred to me to just keep pressing return. The program cycled through four possible opening prompts, and the engagement was broken, like an actor in a movie making eye contact through the fourth wall.
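To make that simplicity concrete, here is a minimal sketch in Python of an Eliza-style responder: hardcoded response templates, regex pattern matching on trigger words, and a fixed cycle of fallback prompts that a run of empty inputs will expose. The specific patterns and phrasings are invented for illustration; they are not the rules from the original program.

```python
import random
import re

# Canned opening prompts, cycled in order. Pressing return on empty input
# walks through this list and back around, exposing the loop.
OPENINGS = [
    "Please tell me your problem.",
    "Please go on.",
    "What does that suggest to you?",
    "I see. Tell me more.",
]

# (trigger pattern, response templates) pairs; {0} is filled from the match.
RULES = [
    (re.compile(r"\bmy (mother|mom)\b", re.I),
     ["Tell me more about your {0}.", "How do you feel about your {0}?"]),
    (re.compile(r"\bi feel (.+)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bi am (.+)", re.I),
     ["Why do you say you are {0}?"]),
]

def respond(text, turn):
    for pattern, templates in RULES:
        match = pattern.search(text)
        if match:
            return random.choice(templates).format(match.group(1).rstrip(".!?"))
    # Nothing matched (or the input was empty): cycle the canned prompts.
    return OPENINGS[turn % len(OPENINGS)]

if __name__ == "__main__":
    for turn in range(100):  # a short session
        print(respond(input("> "), turn))
```

Even at this scale, a few well-chosen triggers can produce a momentary sense of presence – and, as above, a few blank returns are enough to break it.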

For many last week, their engagement with Google’s LaMDA – and its alleged sentience – was broken by an Economist article by AI legend Douglas Hofstadter in which he and his friend David Bender demonstrate how “mind-bogglingly hollow” the same technology sounds when asked a nonsense question like “How many pieces of sound are there in a typical cumulonimbus cloud?”

But I doubt we’ll have these obvious tells of inhumanity forever.

From here on out, the safe use of artificial intelligence requires demystifying the human condition. If we can’t recognize and understand how AI works – if even expert engineers can fool themselves into detecting agency in a “stochastic parrot” – then we have no means of protecting ourselves from negligent or malevolent products.

This is about finishing the Darwinian revolution, and more: understanding what it means to be animals, and extending that cognitive revolution to understanding how algorithmic we also are. We will all have to get over the hurdle of thinking that some particular human ability – creativity, dexterity, empathy, whatever – is going to set us apart from AI. Helping us accept who we really are and how we work, without losing engagement with our lives, is a hugely ambitious project for humanity and for the humanities.

Achieving this understanding without substantial numbers of us embracing polarizing, superstitious or machine-inclusive identities that endanger our societies is a concern not only for the humanities, but also for the social sciences, and for some political leaders. For other political leaders, unfortunately, it may be an opportunity. One path to power can be to encourage and prey on such insecurities and misconceptions, just as some are currently using disinformation to disrupt democracies and regulation. The tech industry, in particular, needs to prove that it is on the side of the transparency and understanding that underpins liberal democracy, not secrecy and autocratic control.

There are two things AI really isn’t, however much I admire the people who claim otherwise: it is not a mirror, and it is not a parrot. Unlike a mirror, it doesn’t just passively reflect the surface of who we are. Using AI, we can generate new ideas, pictures, stories, sayings, music – and anyone who detects these growing capacities is right to be emotionally triggered. In other humans, such creativity is of immense value, not only for recognizing social nearness and social investment, but also for deciding who holds high-quality genes you might like to combine your own with.

AI is not a parrot either. Parrots perceive many of the same colors and sounds that we do, in the way we do, with much the same hardware, and therefore experience much the same phenomenology. Parrots are highly social. They imitate each other, probably to prove in-group affiliation and mutual affection, just like us. This is very, very little like what Google or Amazon does when their devices “parrot” your culture and desires back to you. But at least those organizations have animals (people) in them, and care about things like time. Parrots parroting is nothing like what an AI device is doing at those same moments: shifting digital bits around in a way known to be likely to sell people products.

But does all this mean that AI cannot be sentient? What even is this “sentience” some claim to detect? The Oxford English Dictionary says it is “having a perspective or a feeling.” I’ve heard philosophers say it’s “having a perspective.” Surveillance cameras have perspectives. Machines can “feel” (sense) anything we build sensors for – touch, taste, sound, light, time, gravity – but representing these things as large integers derived from electrical signals means that any machine “feeling” is far more different from ours than even bumblebee vision or bat sonar.
