AGI: AI as Panacea, Fool's Gold, and Doomsday Dream

by Doug McCord
January 02, 2024

Nvidia, the maker of graphics processing units that are at the heart of AI development, has been in the news a lot over the last year, with a stock surge of 219% and a stunning valuation above 1.1 trillion dollars. But the company stole headlines for a different reason at the end of November, when CEO Jensen Huang said at the New York Times DealBook Summit that he believes AI will be “fairly competitive” with humans “within the next five years.”

Chatbots and art generation AIs have dominated the news over the past year, but anyone who’s used ChatGPT knows it’s got a long way to go to be competitive with human beings overall (see below for one example).

This measure—AI that can do what a person can do—is the generally accepted definition of AGI or Artificial General Intelligence. As OpenAI’s once-and-again CEO Sam Altman said somewhat controversially: “for me, AGI is the equivalent of the median human that you could hire as a co-worker.”  

DeepMind co-founder and chief AGI scientist Shane Legg has also predicted we’ll have AGI sooner rather than later, telling Dwarkesh Patel on his podcast that he sees no major roadblocks to AGI, and that he thinks there’s a 50/50 chance we’ll have it by 2028.

If this is true, the impact will be truly stunning: AIs that can do what regular people can, across tasks, would transform the world as we know it. In this article, we look at what AGI is, meet its true believers and skeptics, and consider how it has become the next target in the AI arms race.

  

A brief history of AGI and what we mean by it 

The abbreviation AGI, used to describe human-level artificial intelligence, was popularized by scientist and AI researcher Ben Goertzel starting more than 20 years ago. However, Goertzel credits Shane Legg with introducing him to the term, and it may have first appeared in a 1997 article.

Regardless, other terms have been used for the same concept, including “strong AI,” popularized by Ray Kurzweil.