
Artificial general intelligence is a breakthrough innovation that OpenAI and its rivals are either trying to achieve—or prevent.

What is AGI in AI, and why are people so worried about it?


By Mark Sullivan

We used to worry about AI becoming “sentient,” or that something called the “singularity” would occur and AIs would begin creating other AIs on their own. The new goalpost is something called artificial general intelligence, or AGI, a term that’s being subsumed into the realm of AI marketing and influence-pushing.

Here’s what you need to know.

How do we define AGI?

AGI usually describes systems that can learn to accomplish any intellectual task that human beings can perform, and perform it as well as or better than a human. An alternative definition from Stanford’s Institute for Human-Centered Artificial Intelligence describes AGI as “broadly intelligent, context-aware machines . . . needed for effective social chatbots or human-robot interaction.” The consulting company Gartner defines artificial general intelligence as “a form of AI that possesses the ability to understand, learn, and apply knowledge across a wide range of tasks and domains. It can be applied to a much broader set of use cases and incorporates cognitive flexibility, adaptability, and general problem-solving skills.”

Gartner’s definition is particularly interesting because it nods at the aspect of AGI that makes us nervous: autonomy. Superintelligent systems of the future might be smart enough (and unsafe enough) to work outside of a human operator’s awareness, or work together toward goals they set for themselves.

What’s the difference between AGI and AI?

AGI is an advanced form of AI. AI systems include “narrow AI” systems that do just one specific thing, like recognizing objects within videos, at a cognitive level below that of humans. AGI refers to systems that are generalists; that is, they can learn to do a wide variety of tasks at a cognitive level equal to or greater than a human’s. Such a system might be used to help a human plan a complex trip one day and to find novel combinations of cancer drug compounds the next.

Should we fear AGI? 

Is it time to become concerned about AGI? Probably not. Current AI systems have not risen to the level of AGI. Not yet. But many people inside and outside the AI industry believe that the advent of large language models like GPT-4 has shortened the timeline for reaching that goal.

There’s currently much debate within AI circles about whether AGI systems are inherently dangerous. Some researchers believe they are, because their generalized knowledge and cognitive skill would permit them to invent their own plans and objectives. Other researchers believe that getting to AGI will be a gradual, iterative process, in which there will be time to build in thoughtful safety guardrails at every step.


ABOUT THE AUTHOR

Mark Sullivan is a senior writer at Fast Company, covering emerging tech, AI, and tech policy. Before coming to Fast Company in January 2016, Sullivan wrote for VentureBeat, Light Reading, CNET, Wired, and PCWorld.
