Google’s ‘conscious’ AI can’t count on a minyan, but it still raises ethical dilemmas | Opinion

When a Google engineer told an interviewer that an artificial intelligence (AI) technology developed by the company had become “sentient,” it sparked a passionate debate about what it would mean for a machine to have human self-awareness.

Why the fuss? In part, the story fuels current fears that AI will somehow threaten humanity, and that “thinking” machines will develop a will of their own.

But there is also the deeper concern that if a machine is conscious, it is no longer an inanimate object with no moral status or “rights” (we owe nothing to a rock, for example), but rather a living being with the status of a “moral patient” to whom we owe consideration.

I am a rabbi and engineer currently writing my dissertation on the “moral status of AI” at Bar-Ilan University. In Jewish terms, when machines become conscious, they become the object of the command of “tza’ar ba’alei chayim,” which demands that we not cause suffering to living beings. Philosopher Jeremy Bentham similarly stated that entities become moral subjects when we answer the question “Can they suffer?” in the affirmative.

This is what makes the Google engineer’s claim alarming: he has shifted the status of the computer he conversed with from an object to a subject. That is, the computer (known as LaMDA) can no longer be seen as a machine, but as a being that “can suffer” and thus a being with moral rights.

“Sentience” is an enigmatic label used in philosophy and AI circles that refers to the ability to feel, to experience. It is a general term for a level of consciousness believed to exist in biological beings on a spectrum, from relatively basic sentience in simple creatures (e.g., earthworms) to more robust experience in so-called “higher” organisms (e.g., dolphins, chimpanzees).

Ultimately, however, there is a qualitative leap to human beings, who possess a second-order consciousness, what religious people call a “soul,” which gives us the ability to think about our experiences, not just to have them.

The question then becomes: what is the basis of this claim of sentience? Here we enter the philosophical quagmire known as the problem of “other minds.” We humans don’t actually have a good test to determine whether someone is conscious. We assume that our fellow biological beings are conscious because we know that we are. That, along with our shared biology and shared behavioral responses to things like pain and pleasure, allows us to assume that we are all conscious.

What about machines, then? Many tests have been proposed to determine sentience in machines, the most famous being the Turing test, described by Alan Turing, the father of modern computer science, in his seminal 1950 paper, “Computing Machinery and Intelligence.” He proposed that when a human cannot tell whether he is talking to another human or to a machine, the machine can be said to have attained human-like intelligence, that is, intelligence accompanied by consciousness.

From a cursory reading of the Google engineer’s interview with LaMDA, it seems relatively clear that LaMDA passed the Turing test.

That said, countless machines have passed the Turing test in recent years — so much so that most, if not all, researchers today believe that passing it demonstrates nothing more than advanced language processing, not consciousness. Moreover, after dozens of variations of the test were developed to determine consciousness, philosopher Selmer Bringsjord concluded, “Only God would know a priori, because his test would be direct and non-empirical.”

Aside from the current media frenzy over LaMDA, how should we approach this issue of sentient AI? That is, given that engineering teams around the world have been working on “machine consciousness” since the mid-1990s, what should we do when they achieve it? Or, more urgently, should they be allowed to achieve it at all? Indeed, ethicists argue that this question is more pressing than the question of whether to allow animal cloning.

From a Jewish perspective, I believe a compelling answer to this moral dilemma can be gleaned from the following Talmudic vignette (Sanhedrin 65b), in which a rabbi appears to have created a sentient humanoid or “gavra”:

Rava said: If the righteous willed it, they could create a world, for it is written: “But your iniquities have distinguished between you and your God.” Rava created a humanoid (gavra) and sent it to R. Zeira. R. Zeira spoke to it but received no answer. Thereupon [R. Zeira] said to it, “You are a creature of my colleague: return to your dust.”

For R. Zeira, as for Turing, the power of the soul (i.e., second-order consciousness) is expressed in a being’s ability to articulate itself. R. Zeira, unlike those applying Turing’s test today, could detect the lack of a soul in Rava’s gavra.

Despite R. Zeira’s rejection of the creature, some read in this story permission to create sentient beings — after all, Rava was a learned and holy sage and would not have broken Jewish law by creating his gavra.

But in context, the story displays, at best, a deep ambivalence about people wanting to play God. Recall that the story begins with Rava declaring, “If the righteous willed it, they could create a world,” that is, a sufficiently righteous person could create a real human being (aka “an entire world”). Rava’s failed attempt to do so suggests that he was either wrong in his claim, or that he was not righteous enough.

Some argue that R. Zeira would have been willing to accept a humanoid that was on a human level. But a mystical midrash, or commentary, belies such a claim. In that midrash, the prophet Jeremiah — an embodiment of righteousness — succeeds in creating a humanoid on a human level. Yet that same humanoid, when it comes to life, rebukes Jeremiah for making it! Clearly, the venture of creating sentient humanoids is being rejected — a cautionary tendency also seen in the extensive literature on golems, the inanimate creatures brought to life by rabbinic magic who always run amok.

Space does not allow me to delineate all the moral difficulties associated with the artificial creation of living beings. Suffice it to say, the Jewish tradition sides with opinion leaders such as Joanna Bryson, who said, “Robot builders are ethically obligated to create robots to which robot owners have no ethical obligations.”

Or, in the words of R. Zeira, “Return to your dust.”

The views and opinions expressed in this article are those of the author and do not necessarily reflect the views of JTA or its parent company, 70 Faces Media.
