
Risk projections can’t see around the corners of technology. So how does humanity design a world for dangers that are unknown?

[Source Image: Antonio M. Rosario/Getty Images]

By Jesus Diaz

Can artificial intelligence wipe out humanity? Many experts believe it is possible. Eminent technologists, economists, generals, philosophers, and science fiction authors have imagined many chains of events in which AI leads to global disaster and the end of civilization. But the reality is that we should not fear any of these possible dystopian scenarios because, if we can imagine them, we can also ward them off.

What we should fear is the unimaginable scenario. The one that, being absolutely unknown, cannot be prevented.

History has shown that humans cannot imagine the true destructive potential of technology. Concepts like Skynet—the cybernetic mastermind that took control of the US missile system to launch a massive nuclear attack on Russia and bring about the destruction of humanity—are as childish as they are avoidable. We can also avoid the catastrophes I outlined in a previous risk projection about how generative artificial intelligence could end the very concept of reality, causing an unprecedented global social crisis. In fact, two of the firewalls proposed by the experts interviewed for that article—the creation of new legislation and the cryptographic authentication of real photographs and video—are already underway as I write these lines.

[Source Image: Geerati/Getty Images]

Around the time that article came out, Bank of America analysts published a report stating that artificial intelligence is a revolution comparable to electricity. Weapons, medicines, spacecraft—many industries are already being transformed by a technology that, in just seven years, is projected to contribute $15.7 trillion to the global economy, more than the 2022 annual gross domestic product of the entire eurozone.

In another report, Goldman Sachs estimated that AI could expose two-thirds of jobs in Europe and the United States to automation within just ten years, affecting the equivalent of some 300 million full-time workers—many of them designers, writers, video producers, visual effects professionals, or architects—while creating only about 11 million new jobs in return.

Within days of those reports, a group of experts and technologists called for a halt to the training of large AI models like ChatGPT. Elon Musk, Apple cofounder Steve Wozniak, and more than 1,000 intellectuals and researchers in the AI industry argued that we are playing with a powerful force without first thinking through the possible consequences.

The bad news is that predicting these consequences is impossible. We can extrapolate a few years out, but such projections are extremely limited because they do not account for the unexpected: what futurists like Brian David Johnson, borrowing Nassim Taleb's term, call "Black Swan" events—events that come out of nowhere and have an extraordinary effect.

Johnson, who was Chief Futurist at Intel, wrote a book about sci-fi prototyping, a technique used to imagine the next ten years of any potential threat for militaries and private corporations. Last fall, he told me in a video interview that the further we fast-forward from the present, the less accurate our predictions become and, therefore, the smaller our capacity for action and prevention.

This is especially true when we talk about AI. How do we think about the possible consequences of a technology that is moving at exponential speed—one that surprises its very creators on a daily basis? How do we anticipate an intelligence whose very essence is the ability to create unexpected solutions? There are hundreds of potential Black Swans lurking in the marsh when it comes to one of the most powerful technologies humans have developed.
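
Johnson's point can be made concrete with a little arithmetic. The growth rates below are illustrative assumptions of mine, not figures from his work: when a capability grows exponentially, even a small error in the assumed rate compounds into a large miss within a decade.

```python
# Illustrative only: why long-range forecasts of exponential technologies fail.
# Assume a capability that actually doubles every year (TRUE_RATE), and a
# forecaster whose estimate of that growth rate is off by just five percent.

TRUE_RATE = 2.0     # hypothetical real annual growth factor
ASSUMED_RATE = 1.9  # the forecaster's slightly-off estimate

for years in (1, 3, 5, 10):
    actual = TRUE_RATE ** years
    forecast = ASSUMED_RATE ** years
    error_pct = (1 - forecast / actual) * 100
    print(f"{years:>2} years out: forecast low by {error_pct:.1f}%")
```

A 5% error in the rate becomes a 40% error in the ten-year projection. And that is the well-behaved case: a Black Swan does not merely perturb the curve, it replaces it.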

[Source Image: Antonio M. Rosario/Getty Images]

Fear the unexpected

Projections about humanity’s future are generally extrapolations of the dangers we already know about. In 1950, for example, futurists predicted that polio would be one of the great plagues of the 21st century. Just three years later, American physician Jonas Salk created a vaccine that would all but eliminate the disease in record time. In 1950, no one could ever have imagined that in 2023 polio would be nearly nonexistent, and that today we would be facing other existential threats, like rampant disinformation on social media or, yes, artificial intelligence.


Skynet, the end of reality, 300 million people losing their jobs: these are all derivations of what we know today. We can get ready for them and build guardrails, from kill switches for the worst-case scenarios to legislation that preemptively mitigates known risks. That’s what threat prototyping is for: to think about and prepare for as many crisis scenarios as possible in order to avert total disaster. But what do we do when we face an infinite number of potential outcomes?

Take ChaosGPT, an experimental AI agent that has been instructed to wipe humanity off the planet. Its latest tactic—launched a few weeks back—is to amass human followers to act as its physical arm in the real world. It’s a limited but interesting experiment because it shows how an AI can attempt to do evil autonomously. Will ChaosGPT accomplish its mission to destroy humanity? Certainly not today: its abilities and resources are still far too limited.
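
To make that concrete: ChaosGPT was reportedly built on Auto-GPT, an open-source project that chains language-model calls into a plan-act-observe loop. The sketch below shows that loop in miniature; the stubbed model call, the tool set, and the goal are illustrative assumptions of mine, not ChaosGPT's actual code.

```python
# A minimal sketch of the plan-act-observe loop behind Auto-GPT-style agents.
# Everything here is a stub made up for illustration: a real agent would
# query an actual language model and wire up real tools (web search, file
# access, posting to social media, and so on).

def call_model(goal: str, history: list[str]) -> str:
    """Stand-in for an LLM call that returns the agent's next action."""
    return "search_web: how to gain followers"  # a real agent gets this from the model

TOOLS = {
    "search_web": lambda query: f"(stub) top results for {query!r}",
    "post_message": lambda text: f"(stub) posted: {text!r}",
}

def run_agent(goal: str, max_steps: int = 3) -> None:
    history: list[str] = []
    for step in range(max_steps):
        action = call_model(goal, history)            # plan: ask the model what to do
        tool_name, _, argument = action.partition(": ")
        observation = TOOLS[tool_name](argument)      # act: run the chosen tool
        history.append(f"{action} -> {observation}")  # observe: feed the result back
        print(f"step {step}: {history[-1]}")

run_agent("amass human followers")
```

The important point is the autonomy: once the loop starts, the model, not a human, decides which tool to invoke next.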

But what will happen in the near future, when someone uses a new version with a few trillion parameters? One that is AGI or close to it (artificial general intelligence: an AI that can adapt and create new solutions like a human, but with far more power and speed)? Will that future AI be able to infiltrate Russia’s radar systems and display a false positive of several Ukrainian missiles en route to Moscow, which in turn provokes a strategic nuclear strike on Kyiv, starting an international chain reaction? Will it be able to develop an unstoppable deadly virus and spread it around the world?

Again, those scenarios are predictable and avoidable. Those plots are typically human, but artificial intelligence can come up with millions of solutions that no human could conceive of, not even in our worst nightmares. The same way Midjourney can now generate a thousand different creative solutions to a prompt in just minutes—solutions that, by a simple matter of statistics and time, no human could have thought of—an AGI focused on destroying us will be able to do the same. This is the great danger to our existence.
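
The underlying mechanics are mundane: breadth, not brilliance. Here is a toy sketch of the idea; the two-dimensional solution space and the scoring function are arbitrary stand-ins I chose for illustration, and nothing here models a real AI system.

```python
# Toy sketch of breadth over brilliance: cheaply generate and score a huge
# number of candidates instead of reasoning toward a single good answer.

import random

random.seed(0)

def score(candidate: tuple[float, float]) -> float:
    """Arbitrary stand-in for 'how well does this solution work?' (0 is best)."""
    x, y = candidate
    return -((x - 3.7) ** 2 + (y + 1.2) ** 2)

# A human might reason their way to one or two good answers. A machine can
# simply generate a million candidates and keep the best, including ones
# nobody would have thought to try.
best = max(
    ((random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(1_000_000)),
    key=score,
)
print(f"best of 1,000,000 guesses: ({best[0]:.3f}, {best[1]:.3f}), score {score(best):.5f}")
```

Swap the toy scoring function for "how much damage does this plan do," and that same arithmetic is what makes the scenario frightening.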

[Source Image: Antonio M. Rosario/Getty Images]

The only future we can be certain about

We also have to consider the possibility that AI will not eliminate humans in a premeditated, apocalyptic manner. In fact, this may be the most plausible scenario: Humanity may simply die like the frog in the proverbial pot, the temperature rising slowly, driven by thousands of factors, including our own biological limits. Just as modern humans outlasted every other hominin in the Darwinian race, eliminating the Neanderthals in the process, it is possible that AI will outlive us because it will simply end up being superior to us in every respect. Could AI be the evolutionary apex of life on Earth?

Ryan McClelland—an engineer at NASA’s Goddard Space Flight Center in Greenbelt, Maryland—recently said that whenever someone from the space agency mentions the word “alien,” no matter what it refers to, the statement instantly goes viral. McClelland was referring to articles like the one we published a few months ago about how he uses artificial intelligence to design and build alien-looking spacecraft components. In his case, the fascination makes perfect sense: To most humans, AI now seems like a new proto-life of unknown nature, constantly evolving, so the term alien connects with that human intuition. AI feels Martian.
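
Those parts come out of generative design: software proposes a geometry, simulates its performance, and iterates thousands of times. McClelland's actual pipeline relies on commercial CAD and finite-element tools; the sketch below is only a toy version of the evolve-and-evaluate loop, with made-up "physics" and a part reduced to a list of strut thicknesses.

```python
# Toy sketch of the evolve-and-evaluate loop behind generative design. The
# "part" is just a list of strut thicknesses and the physics is faked; a real
# pipeline evaluates candidate geometries with finite-element simulation.

import random

random.seed(42)
N_STRUTS = 8
MIN_STIFFNESS = 5.0  # assumed structural requirement for the part

def fitness(thicknesses: list[float]) -> float:
    """Lighter is better, but only if the part stays stiff enough (stub physics)."""
    mass = sum(thicknesses)
    stiffness = sum(t ** 2 for t in thicknesses)  # crude stand-in for a simulation
    return -mass if stiffness >= MIN_STIFFNESS else float("-inf")

def mutate(design: list[float]) -> list[float]:
    """Randomly nudge every strut, never letting one vanish entirely."""
    return [max(0.1, t + random.gauss(0, 0.1)) for t in design]

design = [1.0] * N_STRUTS  # start from a uniform, human-obvious design
for _ in range(2000):
    child = mutate(design)
    if fitness(child) >= fitness(design):  # keep any change that is no worse
        design = child

print(f"evolved mass: {sum(design):.2f} (uniform starting design: {float(N_STRUTS):.2f})")
```

The result is whatever shape survives the constraints, not what a human draftsperson would draw, which is why real generative-design parts look so organic and strange.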

But if you really think about it, AI is nothing more than a descendant of ourselves. AI is not alien; AI is human. And in many respects, such as computational capability, it is already superhuman. Soon, when artificial general intelligence is achieved, it will be fully superior to us. On the evolutionary scale, today’s AI resembles the first stages on the way to the next species of the genus Homo, a species that, this time, will not be bound by the limits of biology. Perhaps it will come to be called Homo artificialis intelligentia.

In that framework, McClelland’s AI is creating the first “bones” of a new species that will roam the stars in a few thousand years. And in that distant future, the descendants of these AIs will indeed be aliens on distant worlds, perhaps long after Earth and biological humans have disappeared from the cosmos forever. To me, this is the only “End of Humanity” scenario that seems certain to happen: AI will not kill humanity. Humanity will perish on its own, and AI will survive to keep the flame of our species alive.


ABOUT THE AUTHOR

Jesus Diaz is a screenwriter and producer whose latest work includes the mini-documentary series Control Z: The Future to Undo, the futurist daily Novaceno, and the book The Secrets of Lego House.

