This is what it takes for AI to be creative

Photo credit: Micha L. Rieser, via Wikimedia Commons.

Editor’s Note: We are pleased to present an excerpt from Chapter 2 of the new book Non-Computable You: What You Do That Artificial Intelligence Never Will by computer engineer Robert J. Marks, director of the Discovery Institute’s Walter Bradley Center for Natural and Artificial Intelligence.

Selmer Bringsjord and his colleagues have proposed the Lovelace test as a replacement for the defective Turing test. The test is named after Lady Ada Lovelace (1815-1852).

Bringsjord defined software creativity as passing the Lovelace test: the program must do something that cannot be explained by the programmer or by an expert in computer code. Computer programs can produce unexpected and surprising results; the results of computer programs are often unforeseen. But the question is: can the programmer explain the result after the fact?

When it comes to assessing creativity (and thus consciousness and humanity), the Lovelace test is a much better test than the Turing test. If an AI really produces something surprising that cannot be explained by the programmers, then the Lovelace test has been passed and we may be looking at creativity. So far, however, no AI has passed the Lovelace test. There have been many instances where a machine looked as if it were being creative, but on closer inspection the appearance of creativity faded.

Here are a few examples.

AlphaGo

A computer program called AlphaGo learned to play Go, the most difficult of all popular board games. AlphaGo was a monumental contribution to machine intelligence. AI had already mastered tic-tac-toe, then the more complicated game of checkers, and then the even more complicated game of chess. The conquest of Go remained an unfulfilled goal of AI until AlphaGo finally achieved it.

In a match against (human) world champion Lee Sedol in 2016, AlphaGo made a surprising move. Those who understood the game described the move as ingenious and unlike anything a human would ever do.

Did we see the human quality of creativity in AlphaGo beyond the intention of the programmers? Does this act pass the Lovelace test?

AlphaGo’s programmers claim that they did not foresee the unconventional move, and that is probably true. But AlphaGo had been trained by its programmers to play Go, a fixed-rule board game in a static, never-changing arena. And that’s what the AI did, and did well: it applied programmed rules within a narrow, rule-bound game.

So no. The Lovelace test was not passed. If the AlphaGo AI were to perform a task it was not programmed to do, such as beating all comers at the simple game of Parcheesi (pictured above), the Lovelace test would be passed. But as it stands, AlphaGo is not creative. It can only perform the task for which it was trained, which is to play Go. When asked, AlphaGo cannot even explain the rules of Go.

That said, AI can seem smart if it produces a surprising result. But surprise does not equal creativity. When a computer program is asked to search through a billion designs to find the best one, the result can be a surprise. But that’s not creativity. The computer program has done exactly what it was programmed to do.
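To make that concrete, here is a minimal sketch in Python of a program that searches a large pool of candidate designs and returns the best one. The scoring rule and pool size are made up for illustration, not taken from any real design system:

```python
import random

def score(design):
    # A fixed objective written by the programmer; any rule will do here.
    return -sum((x - 0.5) ** 2 for x in design)

# Generate a large pool of random candidate designs...
candidates = [[random.random() for _ in range(10)] for _ in range(100_000)]

# ...and keep the one the programmed rule scores highest.
best = max(candidates, key=score)
print(best, score(best))
```

The winning design may surprise the programmer, but the program has only graded candidates against a rule it was handed. That is search, not creativity.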

The Sacrificial Dweeb

Here’s another example from my personal experience. The Office of Naval Research contracted Ben Thompson, of the Penn State Applied Research Lab, and me to develop swarm behaviors. Simple swarm rules can result in unexpected emergent behaviors, such as the swarm stacking into cones. Given simple rules, finding the associated emergent behavior is easy: just run a simulation. But the inverse design problem is harder. If you want a swarm to perform a task, what simple rules should the swarm bugs follow? To solve this problem, we applied evolutionary computing, an AI technique that sifts through thousands of possible rule sets to find the one that best delivers the desired performance.
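A minimal sketch of that kind of evolutionary search might look like the following Python. The encoding, parameters, and names are illustrative assumptions of mine, not the actual project code: it assumes the swarm rules can be expressed as a vector of numbers and scored by a fitness function, such as the simulated survival time sketched in the next section.

```python
import random

def evolve(fitness, n_params=6, pop_size=50, generations=200, mut_scale=0.1):
    """Evolve a rule vector that maximizes `fitness`.

    Each individual encodes simple swarm rules as numbers
    (e.g., flee weights and speeds); `fitness` scores a rule set.
    """
    population = [[random.uniform(-1.0, 1.0) for _ in range(n_params)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the better half of the population as parents.
        parents = sorted(population, key=fitness, reverse=True)[:pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_params)        # one-point crossover
            child = [g + random.gauss(0.0, mut_scale)  # Gaussian mutation
                     for g in a[:cut] + b[cut:]]
            children.append(child)
        population = parents + children
    return max(population, key=fitness)
```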

One problem we looked at involved a predator-prey swarm. All the action took place in a closed square virtual room. Predators, called bullies, ran around chasing prey, called dweebs. Bullies caught dweebs and killed them. We wondered what behavior would emerge if the goal was to maximize the survival time of the dweeb swarm, measured as the time until the last dweeb was killed.
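Here is a toy version of such a simulation, just to show how a survival-time fitness can be computed. The room geometry, speeds, and catch radius are illustrative guesses, not the actual Penn State model:

```python
import math, random

def survival_time(flee_weight, n_dweebs=20, n_bullies=3,
                  steps=2000, catch_radius=0.02):
    """Bullies chase dweebs in a unit square; return the number of
    simulation steps until the last dweeb is caught."""
    dweebs = [(random.random(), random.random()) for _ in range(n_dweebs)]
    bullies = [(random.random(), random.random()) for _ in range(n_bullies)]

    def step(p, q, speed):
        # Move point p toward q (away from q if speed is negative),
        # clamped to the walls of the unit square.
        dx, dy = q[0] - p[0], q[1] - p[1]
        d = math.hypot(dx, dy) or 1e-9
        return (min(1.0, max(0.0, p[0] + speed * dx / d)),
                min(1.0, max(0.0, p[1] + speed * dy / d)))

    for t in range(steps):
        # Each bully chases its nearest dweeb.
        bullies = [step(b, min(dweebs, key=lambda d: math.dist(b, d)), 0.012)
                   for b in bullies]
        # Each dweeb flees its nearest bully: the programmed rule.
        dweebs = [step(d, min(bullies, key=lambda b: math.dist(d, b)),
                       -0.01 * flee_weight)
                  for d in dweebs]
        # Any dweeb within the catch radius of a bully is killed.
        dweebs = [d for d in dweebs
                  if all(math.dist(d, b) > catch_radius for b in bullies)]
        if not dweebs:
            return t
    return steps
```

Plugged into the search sketched above as, say, evolve(lambda rules: survival_time(rules[0]), n_params=1), the loop simply breeds rule sets that score well; because the simulation is stochastic, a real run would average the fitness over several trials.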

After running the evolutionary search, we were surprised by the result: the dweebs sacrificed themselves in order to maximize the overall lifespan of the swarm.

Here’s what we saw: A single dweeb caught the attention of all the bullies, who chased it in circles around the room. Round and round they went, adding seconds to the total life of the swarm. During the chase, all the other dweebs huddled in the corner of the room, trembling with what appeared to be fear. In the end, the pursuing bullies killed the sacrificial dweeb, and pandemonium erupted as the surviving dweebs scattered in fear. Eventually another sacrificial dweeb emerged, and the process repeated. The new sacrificial dweeb kept the bullies running in circles while the remaining dweebs huddled in a corner.

The sacrificial dweeb result was unexpected, a complete surprise. Nothing was written in the evolutionary computer code that explicitly called for these sacrificial dweebs. Is this an example of AI doing something we hadn’t programmed it to do? Did it pass the Lovelace test?

Absolutely not

We programmed the computer to search through millions of strategies to find one that would maximize the life of the dweeb swarm, and that’s what the computer did. It evaluated options and chose the best. The result was a surprise, but it failed the Lovelace test for creativity. The program did exactly what it was written to do. And the apparently frightened dweebs didn’t actually tremble with fear; people tend to project human emotions onto non-conscious things. The huddled dweebs were simply staying as far away from the nearest bully as possible. They were programmed to do this.

If the sacrificial-dweeb behavior and the unexpected Go move against Lee Sedol fail the Lovelace test, then what would pass it? The answer: anything outside of what the code was programmed to do.

Consider the predator-prey swarm again. The Lovelace test would be passed if some dweebs turned aggressive and started attacking and killing lone bullies, a possible action we hadn’t programmed into the array of available strategies. But that didn’t happen, and because a dweeb’s ability to kill a bully isn’t written into the code, it never will.

Likewise, without additional programming, AlphaGo will never engage opponent Lee Sedol in trash talk or psychoanalyze Sedol to get a leg up in the game. Any of those things would be creative enough to pass the Lovelace test. But remember: the AlphaGo software as written couldn’t even explain its own programmed behavior, the game of Go.
