Why We Talk About Computers With Brains (And Why The Metaphor Is All Wrong)

It is a widely recognized truth that the machines are taking over. What is less clear is whether the machines know that. Recent claims by a Google engineer that the LaMDA AI chatbot may have become conscious made international headlines and sent philosophers into a tizz. Neuroscientists and linguists were less enthusiastic.

As AI becomes more capable, the debate about the technology shifts from the hypothetical to the concrete, and from the future to the present. This means a broader cross-section of people – not just philosophers, linguists and computer scientists, but also policymakers, politicians, judges, lawyers and jurists – must form a more sophisticated view of AI.

After all, the way policymakers talk about AI already determines decisions about how to regulate that technology.

Take, for example, the case of Thaler v Commissioner of Patents, which was launched in Australia’s Federal Court after the Commissioner of Patents rejected an application naming an AI as an inventor. When Justice Beach disagreed and allowed the application, he made two key findings.

First, he found that the word “inventor” simply describes a function, one that could be performed by a human or by a thing. Think of the word “dishwasher”: it can describe a person, a kitchen appliance or even an eager dog.

Nor does the word “dishwasher” necessarily imply that the agent is any good at its job…

Second, Justice Beach used the brain metaphor to explain what AI is and how it works. Reasoning by analogy with human neurons, he found the AI system in question could be regarded as autonomous, and so could fit the description of an inventor.

The case raises an important question: where does the idea that AI is like a brain come from? And why is it so popular?

AI for the Mathematically Challenged

Understandably, those without technical training may rely on metaphors to understand complex technology. But we would hope policymakers develop a rather more sophisticated understanding of AI than the one we get from RoboCop.

My research examined how legal scholars talk about AI. One significant challenge for this group is that they are often math-phobic. As the jurist Richard Posner argues, the law

offers a haven for smart youngsters with a ‘math block’, although this usually means they shied away from math and science courses because they could get higher grades with less work in verbal fields.

Following Posner’s insight, I reviewed every use of the term “neural network” – the usual label for a common type of AI system – published in a range of Australian law journals between 2015 and 2021.

Most papers made an effort to explain what a neural network was. But only three of the nearly 50 papers engaged with the underlying mathematics beyond a broad reference to statistics. Only two papers used visual aids to support their explanations, and none made use of the computer code or mathematical formulas that are central to neural networks.

In contrast, two-thirds of the explanations referred to the “mind” or to biological neurons. And the vast majority of those drew a straightforward analogy. That is, they suggested that AI systems replicate the function of human minds or brains. The metaphor of the mind is clearly more attractive than the underlying mathematics.
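
For readers who want to see what that underlying mathematics actually looks like, here is a minimal sketch in Python. The inputs, weights and bias are arbitrary numbers chosen purely for illustration, not values from any real system. It shows the calculation at the heart of a single artificial “neuron”: a weighted sum of inputs passed through a simple activation function. Nothing in it requires any reference to minds or brains.

```python
# A minimal illustration of the arithmetic inside one artificial "neuron":
# multiply each input by a weight, add them up with a bias, then apply a
# simple activation function. The numbers below are arbitrary.

def neuron(inputs, weights, bias):
    """Weighted sum of inputs plus bias, passed through a ReLU activation."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return max(0.0, weighted_sum)  # ReLU: keep positive values, zero out the rest

# Three inputs, three made-up weights, one made-up bias.
print(neuron([1.0, 0.0, 1.0], weights=[0.5, -0.25, 0.25], bias=0.0))  # 0.75
```

A full network is simply many of these calculations chained together, with the weights tuned statistically against data.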

It’s no wonder, then, that our policymakers and judges – as well as the general public – make so much use of these metaphors. But the metaphors lead them astray.

Where did the idea that AI is like the brain come from?

Understanding what produces intelligence is an old philosophical problem that was eventually taken up by the science of psychology. An influential framing of the problem appeared in William James’ 1890 book Principles of Psychology, which set early scientific psychologists the task of identifying a one-to-one correlation between each mental state and a physiological state in the brain.

In the 1920s, the neurophysiologist Warren McCulloch attempted to solve this “mind/body problem” by proposing a “psychological theory of mental atoms”. In the 1940s he joined Nicholas Rashevsky’s influential biophysics group, which sought to apply the mathematical techniques used in physics to problems in neuroscience.

Key to these efforts were attempts to build simplified models of how biological neurons might work, which could then be refined into more sophisticated, mathematically rigorous explanations.



Read more: We’re told that AI neural networks “learn” the way humans do. A neuroscientist explains why that isn’t the case


If you have vague memories of your high school physics teacher trying to explain the motion of particles by analogy with billiard balls or long metal slinkies, then you get the general picture. Start with some very simple assumptions, understand the basic relationships, and work out the complexities later. In other words: assume a spherical cow.

In 1943, McCulloch and the logician Walter Pitts proposed a simple model of neurons intended to explain the “heat illusion” phenomenon. While it was ultimately an unsuccessful account of how neurons work – McCulloch and Pitts later abandoned it – it was a very useful tool for designing logic circuits. Early computer scientists adapted their work into what is now known as logic design, where the naming conventions – “neural networks”, for example – persist to this day.
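
To see why such a simplified model was so handy for logic design, consider the sketch below. It is my own illustrative simplification in Python, not McCulloch and Pitts’ original formalism: a unit that “fires” when the weighted sum of its binary inputs reaches a threshold. Simply by choosing the threshold, the same unit behaves as an AND gate or an OR gate.

```python
# An illustrative threshold unit in the spirit of McCulloch and Pitts'
# 1943 model (a deliberate simplification): output 1 if the weighted sum
# of binary inputs meets a threshold, otherwise 0.

def threshold_unit(inputs, weights, threshold):
    """Fire (return 1) when the weighted sum of inputs reaches the threshold."""
    return 1 if sum(x * w for x, w in zip(inputs, weights)) >= threshold else 0

def and_gate(a, b):
    return threshold_unit([a, b], weights=[1, 1], threshold=2)  # both inputs must be on

def or_gate(a, b):
    return threshold_unit([a, b], weights=[1, 1], threshold=1)  # one input is enough

# Print the full truth tables for both gates.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", and_gate(a, b), "OR:", or_gate(a, b))
```

There is nothing neural about the unit itself; it is just arithmetic and a comparison, which is exactly why it slotted so neatly into circuit design.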

That computer scientists still use terms like these seems to have fueled the popular misconception that there is an intrinsic link between certain types of computer programs and the human brain. It’s as if the simplified assumption of a spherical cow turned out to be a useful way of describing how to design ball pits and led us all to believe that there is a necessary connection between children’s play equipment and dairy farming.

This would be little more than a curiosity of intellectual history, were it not for the fact that these misconceptions shape our policy responses to AI.

Is the solution to force lawyers, judges and policymakers to pass high school calculus before they start talking about AI? They would certainly object to such a proposal. But in the absence of better mathematical literacy, we must use better analogies.

While the Full Federal Court has since overturned Justice Beach’s decision in Thaler, it specifically noted the need for policy development in this area. Without giving non-specialists better ways to understand and talk about AI, we will likely continue to face the same challenges.
