“Artificial intelligence” as we know it today is a misnomer at best. AI is not intelligent in any way, but it is artificial. It remains one of the hottest topics in industry and is enjoying renewed interest in academia. This is not new: the world has experienced a series of AI booms and busts over the past 50 years. But what makes the current wave of AI success different is that modern computer hardware is finally powerful enough to fully implement some wild ideas that have been around for a long time.
In the 1950s, in the early days of what we now call artificial intelligence, there was a debate about what to name the field. Herbert Simon, co-developer of both the Logic Theorist and the General Problem Solver, argued that the field should have the much more anodyne name “complex information processing.” This certainly does not inspire the awe that “artificial intelligence” does, nor does it convey the idea that machines can think like humans.
However, “complex information processing” is a much better description of what artificial intelligence actually is: parsing complicated data sets and trying to draw conclusions from the pile. Some modern examples of AI are speech recognition (in the form of virtual assistants like Siri or Alexa) and systems that determine what’s in a photo or recommend what to buy or watch next. None of these examples compare to human intelligence, but they show that we can do remarkable things with enough information processing.
Whether we call this field “complex information processing” or “artificial intelligence” (or the more ominous, Skynet-sounding “machine learning”) is irrelevant. A tremendous amount of work and human ingenuity has gone into building some absolutely incredible applications. Take GPT-3 as an example: a deep natural language model that can generate text indistinguishable from text written by a person (but that also makes hilarious mistakes). It is backed by a neural network that uses more than 170 billion parameters to model human language.
Built on top of GPT-3 is a tool called DALL-E, which produces an image of whatever fantastic thing a user requests. The updated 2022 version of the tool, DALL-E 2, goes even further, as it can “understand” styles and concepts that are quite abstract. For example, if you ask DALL-E 2 to visualize “an astronaut on a horse in the style of Andy Warhol,” you’ll get a set of images matching that description.
DALL-E 2 does not perform a Google search to find a similar image; it creates an image based on its internal model. This is a new image, built from nothing but math.
Not all AI applications are as groundbreaking as this one, but AI and machine learning are used in almost every industry. Machine learning is quickly becoming a must-have, powering everything from recommendation engines in retail to pipeline safety in the oil and gas industry to diagnosis and patient privacy in healthcare. Not every company has the resources to build tools like DALL-E from scratch, so affordable, viable toolsets are in high demand. The challenge of meeting that demand has parallels to the early days of business computing, when computers and computer programs were rapidly becoming technology that companies needed. While not everyone needs to develop the next programming language or operating system, many companies want to harness the power of these new fields and need similar tools to help them.