
Artificial general intelligence is a breakthrough innovation that OpenAI and its rivals are either trying to achieve—or prevent.

What is AGI in AI, and why are people so worried about it?


By Mark Sullivan

We used to worry about AI becoming “sentient,” or that something called the “singularity” would occur and AIs would begin creating other AIs on their own. The new goalpost is something called artificial general intelligence, or AGI, a term that’s being subsumed into the realm of AI marketing and influence-pushing.

Here’s what you need to know.

How do we define AGI?

AGI usually describes systems that can learn to accomplish any intellectual task that human beings can perform, and perform it better. An alternative definition from Stanford’s Institute for Human-Centered Artificial Intelligence defines AGI as “broadly intelligent, context-aware machines . . . needed for effective social chatbots or human-robot interaction.” The consulting company Gartner defines artificial general intelligence as “a form of AI that possesses the ability to understand, learn, and apply knowledge across a wide range of tasks and domains. It can be applied to a much broader set of use cases and incorporates cognitive flexibility, adaptability, and general problem-solving skills.”

Gartner’s definition is particularly interesting because it nods at the aspect of AGI that makes us nervous: autonomy. Superintelligent systems of the future might be smart enough (and unsafe enough) to work outside of a human operator’s awareness, or work together toward goals they set for themselves.

What’s the difference between AGI and AI?

AGI is an advanced form of AI. AI systems include “narrow AI” systems that do just one specific thing, such as recognizing objects within videos, at a cognitive level below that of humans. AGI refers to systems that are generalists; that is, they can learn to do a wide variety of tasks at a cognitive level equal to or greater than a human’s. Such a system might be used to help a human plan a complex trip one day and to find novel combinations of cancer drug compounds the next.

Should we fear AGI? 

Is it time to become concerned about AGI? Probably not. Current AI systems have not risen to the level of AGI. Not yet. But many people inside and outside the AI industry believe that the advent of large language models like GPT-4 has shortened the timeline for reaching that goal.

There’s currently much debate within AI circles about whether AGI systems are inherently dangerous. Some researchers believe they are, because their generalized knowledge and cognitive skill would permit them to invent their own plans and objectives. Others believe that getting to AGI will be a gradual, iterative process, one that leaves time to build in thoughtful safety guardrails at every step.

How far away is AGI?

There’s a lot of disagreement over how soon the artificial general intelligence moment will arrive. Microsoft researchers say they’ve already seen “sparks” of AGI in GPT-4 (Microsoft owns 49% of OpenAI). Anthropic CEO Dario Amodei says AGI will arrive in just two to three years. DeepMind co-founder Shane Legg predicts that there is a 50% chance AGI will arrive by 2028.

Google Brain cofounder and current Landing AI CEO Andrew Ng says the tech industry is still “very far” from achieving systems that capable. And he’s concerned about the misuse of the term itself. “The term AGI is so misunderstood,” he says.

“I think that it’s very muddy definitions of AGI that make people jump on the ‘are we getting close to AGI?’ question,” Ng says. “And the answer is no, unless you change the definition of AGI, in which case you could totally be there in three years or maybe even 30 years ago.” 

Why AGI is still so divisive in the broader AI field

People may be stretching the definition of AGI to suit their own ends, Ng believes. “The problem with redefining things is people are so emotional, positive and negative; they have hopes and fears attached to the term AGI. And when you have companies that say they reached AGI because they changed the definition, it just generates a lot of hype.”

OpenAI’s definition of the term, in fact, has been somewhat flexible. The company, whose stated goal is to create AGI, defines artificial general intelligence in its charter (published in 2018) as “highly autonomous systems that outperform humans at most economically valuable work,” systems it says should benefit “all of humanity.” But OpenAI’s CEO Sam Altman has more recently defined AGI as “AI systems that are generally smarter than humans,” a seemingly lower bar to hit.

Hype can fuel interest and investment in a technology, but it can also create a bubble of expectations that, when unmet, eventually bursts. That’s perhaps the biggest risk to the current AI boom. Some very good things might result from advances in generative AI, but it will take time.


ABOUT THE AUTHOR

Mark Sullivan is a senior writer at Fast Company, covering emerging tech, AI, and tech policy. Before coming to Fast Company in January 2016, Sullivan wrote for VentureBeat, Light Reading, CNET, Wired, and PCWorld.
