Singularity Scenarios: The Ultimate Innovation or an AI Apocalypse?

If we do have something we can describe as a Singularity, what then?

My talk last weekend at the New York Future Salon explored the likelihood and the implications of the transformative event known as the "Singularity." I tend to part ways with many Singularity enthusiasts over two small issues: what comes before a Singularity, and what comes after.

In terms of what comes before, I'm generally in the camp that machine-substrate intelligence is very likely possible, but is probably a much more complex problem than some of the more enthusiastic Singularitarians would have us think. We currently have a single model of a mind emerging from a physical structure--the human brain--and (as 2009 Singularity Summit speaker David Chalmers noted) we're not even sure how that happens. Add to that issues around learning, around complexity, and around the very definition of intelligence, and you have the potential for a situation where--even if no physical laws prevent the emergence of artificial general intelligence--"real" AI remains the computer science version of nuclear fusion: perpetually just a couple of decades away (with plenty of dead-ends and showy hoaxes along the way).

[Image: The Singularity Story]

I've noted elsewhere that I suspect "a stand-alone artificial mind will be more a tool of narrow utility than something especially apocalyptic." Part of the reason is the sheer difficulty of the problem, but another part is the near-certainty that the technologies of human intelligence augmentation will continue apace. The technologies that turn out to be dead-ends for constructing a self-aware artificial mind could easily be of great value as non-conscious assistants to human minds.

Neither the notion that creating "real" AI may turn out to be extraordinarily difficult nor the idea that human intelligence augmentation could prove a more promising line of research gets much push-back from the more thoughtful Singularity proponents I've encountered. After all, both have been demonstrably true so far. A tougher sticking point, however, comes when I explore what could come afterwards.

If greater-than-human artificial intelligence emerges out of aggressively competitive projects, each seeking to be first, and is put to use without much thought to what might happen next, then the traditional Singularity scenario seems pretty likely. But that's not the only one:

[Image: Singularity Scenarios]

The upper-left scenario, "Out of Control," is the more-or-less conventional Singularity story. AI gets smart, gets loose, and does as it will. It could be hell, it could be heaven, but either way it's pretty much out of our hands. In short, this is the scenario in which AIs eliminate our civilization.

Upper-right, "Taxes and Allies," is a world where competitive projects lead to real AI, but are undertaken with a greater awareness of implications and impacts. AIs in this world are kept as business tools--they may in fact be corporations themselves (as in Charlie Stross' novel Accelerando)--but remain embedded in human civilization. This could, by the way, be one of the pathways to a "robonomics" economy. This is the scenario in which AIs become part of our civilization.

Lower-left is "Eat Your Vegetables*." In this scenario, the greater-than-human AIs emerge not from tools of competition, but from tools of collaboration--imagine, for example, AIs emerging out of software intended to help humanity manage climate disruption. The result is a world where these systems are less about artificial intelligence and more about artificial wisdom: assisting us in doing the right things for ourselves and our planet. (Listen to the audio scenario "The Chorus" for an example of this kind of world.) This is the scenario in which AIs take care of our civilization.

Finally, "Djinni in a Bottle" offers a scenario where AIs come from collaborative tools, but in a context of management of impacts and implications. As in the classic tales of djinnis, this is a world in which beneficial and detrimental results occur based on just how wisely we use our power. This is the scenario in which AIs empower the best and worst of our civilization.

Three of the four scenarios (leaving aside "Out of Control") assume that human social intelligence, augmentation technology, and competition continue to develop. And in all three, human civilization--with its resulting conflicts and mistakes, communities and arts, and, yes, politics--remains a vital force even after a Singularity has begun.

One key aspect of the three is that they're not necessarily end states. Each could, given the right drivers, eventually evolve into one of the others. Moreover, all three could in principle exist side-by-side.

I noted earlier that I differ from many Singularity enthusiasts in my take on what happens before and after a Singularity. I suppose I differ in my take on what happens during one, as well. I don't think a Singularity would be visible to those going through it. Even the most disruptive changes are not universally or immediately distributed, and late followers learn from the reactions and dilemmas of those who first encountered the disruptive change.

Perhaps the most notable aspect of a Singularity is that, ultimately, it's only clearly visible in the distance: off in the future, where it remains a mysterious veil; and in the past, after we've moved along far enough to see just how differently we're living our lives.

(When video from my latest talk, a lively over-capacity event, is online, I'll post a link at my home Web site, Open the Future. My talk slides are available via Slideshare.)

*For those of you unfamiliar with this bit of American idiom, it refers to being told (often forced) to do something that's actually good for us, but not necessarily pleasant.

Images:

The Singularity Story and Singularity Scenarios, both by Jamais Cascio, licensed under Creative Commons.
