
AGI 101: Unraveling the Future of AI with DeepMind's New Framework

VIVE POST-WAVE Team • Nov. 17, 2023

8-minute read

Nine Definitions of AGI and Their Limitations

OpenAI CEO Sam Altman recently revealed in an interview with the Financial Times that he is seeking more funding from Microsoft to further develop "Artificial General Intelligence" (AGI). AGI has become the ultimate goal for AI industry leaders, including DeepMind, which recently published a paper titled "Levels of AGI: Operationalizing Progress on the Path to AGI" that offers a new framework for understanding AGI and its development. As the paper notes, many AI researchers and organizations have tried to define AGI. Let's walk through the nine attempts to define AGI that the paper highlights.

1. The Turing Test

Let's start with the OG: the Turing Test. In 1950, Alan Turing published a groundbreaking paper in which he predicted the possibility of creating intelligent machines. Since "intelligence" was difficult to define precisely, he proposed the Turing Test instead: a machine that can hold a conversation with a human without being recognized as a machine passes the test.

Even though many LLMs, including ChatGPT, have seemingly passed the Turing Test, they're still a long way from being considered true Artificial General Intelligence.

Alan Turing, the father of AI. (Source: Wikipedia)

2. Strong AI: Systems Possessing Consciousness

"Strong Artificial Intelligence" is a concept proposed by philosopher John Searle, who believed that computers can possess thoughts through the right programming and the right algorithms. However, pushing the criteria for AGI towards the more difficult-to-define concept of "consciousness" makes it harder to discuss.

3. Analogies to the Human Brain

The term "AGI" first appeared in an article on military technology by Mark Gubrud in 1997, where he defined AGI as an AI system that matches or surpasses human intelligence in complexity and speed. However, the Deepmind paper points out that although modern machine learning and neural networks are inspired by the human brain, it does not necessarily mean that AGI must or even should develop according to the patterns of the human brain.

4. Human-Level Performance on Cognitive Tasks

Legg and Goertzel proposed that AGI is achieved when a machine can perform the cognitive tasks that people can perform. But what counts as a cognitive task? And which humans set the benchmark? This definition may sound reasonable until you consider the rise of generative AI: tools like ChatGPT and Midjourney have already reached, and in some cases surpassed, human-level performance on certain tasks. Does that really mean we have achieved AGI?

5. Ability to Learn Tasks

Murray Shanahan, a professor of cognitive robotics at Imperial College London, proposed in his book "The Technological Singularity" that AGI is an AI that, like a human, can learn to perform a wide range of tasks. His framework emphasizes the importance of learning and generality.

As a quick aside, Murray Shanahan was the AI consultant for "Ex Machina." (Source: Ex Machina)

6. Economically Valuable Work

According to OpenAI's 2018 Charter, AGI refers to "highly autonomous systems that outperform humans at most economically valuable work." However, the DeepMind paper argues that this definition is somewhat narrow: many intellectually valuable tasks, such as art and creative work, have no clear economic value.

7. Flexible and General – The "Coffee Test" and Related Challenges

Gary Marcus, a cognitive psychologist who recently criticized Yann LeCun, also has his own definition of AGI: AI that is as flexible and general as human intelligence. He lists five task categories, from understanding movies to cooking in any kitchen. The DeepMind paper notes that this is similar to Steve Wozniak's "Coffee Test": an AGI should be able to walk into any American household and make a cup of coffee, finding the coffee machine and the beans, adding water, finding a mug, pressing the right buttons, and finally brewing a good cup.

8. Artificial Capable Intelligence

Another AI expert, Mustafa Suleyman, co-founder of DeepMind and founder of Inflection AI, proposed the concept of "Artificial Capable Intelligence" (ACI) in his book "The Coming Wave." He focuses on what AI can do, defining ACI as an AI system with sufficient capability and generality to perform complex, multi-step tasks in an open-world environment. He also offers his version of a "modern Turing Test": give an AI $100,000 and consider the test passed if it can turn that into $1 million within a few months. Well, if it can do that, I'm impressed too.

9. SOTA LLMs as Generalists

Blaise Agüera y Arcas and Peter Norvig, two researchers from Google, recently argued that today's leading large language models already qualify as AGI. They believe "generality" is the key property of AGI, and these models are already general enough to hold extensive discussions, perform a variety of tasks, handle multimodal input and output, and operate in multiple languages.

AGI Levels: From Emerging to Superhuman

In summary, the DeepMind paper presents a more practical approach to classifying and defining AGI, built on six principles:

  1. Focus on Capabilities, not Processes
  2. Focus on Generality and Performance
  3. Focus on Cognitive and Metacognitive Tasks
  4. Focus on Potential, not Deployment
  5. Focus on Ecological Validity
  6. Focus on the Path to AGI, not a Single Endpoint

Levels of AGI. (Source: DeepMind)
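To make the framework more concrete, here is a minimal Python sketch of the paper's performance tiers. The level names and percentile thresholds come from the paper's "Levels of AGI" table; the class, function, and variable names are our own illustration, not anything the paper defines.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AGILevel:
    level: int          # 0 (No AI) through 5 (Superhuman)
    name: str           # the paper's label for this tier
    performance: str    # performance relative to skilled adult humans

# Performance tiers from the paper; each tier applies to both narrow
# systems (a single task) and general systems (a wide range of tasks).
LEVELS = [
    AGILevel(0, "No AI", "no machine intelligence involved"),
    AGILevel(1, "Emerging", "equal to or somewhat better than an unskilled human"),
    AGILevel(2, "Competent", "at least 50th percentile of skilled adults"),
    AGILevel(3, "Expert", "at least 90th percentile of skilled adults"),
    AGILevel(4, "Virtuoso", "at least 99th percentile of skilled adults"),
    AGILevel(5, "Superhuman", "outperforms 100% of humans"),
]

def classify(level: int, general: bool) -> str:
    """Combine a performance tier with generality, e.g. 'Emerging AGI'."""
    return f"{LEVELS[level].name} {'AGI' if general else 'Narrow AI'}"

# The paper rates frontier LLMs such as ChatGPT as "Emerging AGI",
# and a system like AlphaGo as "Virtuoso Narrow AI".
print(classify(1, general=True))   # -> Emerging AGI
print(classify(4, general=False))  # -> Virtuoso Narrow AI
```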

The Terrifying Threat of AI

Since this DeepMind paper starts from "practicality" as its premise, the levels of AGI are not merely listed for fun. We are all familiar with the various AI-threat discussions that have emerged since the beginning of the AI boom, but the reason it's so hard to have a focused, precise conversation is that there is no commonly accepted definition of intelligence to build on. The DeepMind paper therefore provides a new and important framework that clearly outlines the risks and threats associated with different levels of AGI.

Autonomy Levels

It is worth noting that an AGI's capability level does not directly correlate with its level of risk. Rather, it is the human tendency to grant AI "autonomy" for the sake of convenience that creates potential threats. This is worth keeping in mind when deploying AI: a high-capability but low-autonomy AGI is relatively safe, since its actions remain under human control and supervision. The paper lays out a ladder of autonomy levels, sketched below.
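As a rough illustration, here is a minimal sketch, again in Python, of that autonomy ladder. The level descriptions paraphrase the paper's "Levels of Autonomy" table; the `human_oversight` helper is purely illustrative and not something the paper defines.

```python
# The paper's six autonomy levels, from no AI at all to a fully
# autonomous agent. Risk grows with how much control humans hand
# over, not with capability alone.
AUTONOMY_LEVELS = {
    0: "No AI - a human does everything",
    1: "AI as a Tool - the human controls the task; AI automates sub-tasks",
    2: "AI as a Consultant - AI plays a substantive role, but only when invoked",
    3: "AI as a Collaborator - co-equal human-AI collaboration",
    4: "AI as an Expert - AI drives the interaction; humans guide and give feedback",
    5: "AI as an Agent - fully autonomous AI",
}

def human_oversight(level: int) -> bool:
    """Rough proxy: below Level 5, a human can still supervise or intervene."""
    return level < 5

for lvl, desc in AUTONOMY_LEVELS.items():
    print(f"Level {lvl}: {desc} (human oversight: {human_oversight(lvl)})")
```

It seems like this DeepMind paper has made the future of AI development and its potential risks quite a bit clearer. Are you still looking forward to an AI-powered future?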