The 2009 Singularity Summit is coming up this weekend (October 3-4) in New York City. If previous years are any guide, the next week or so will be filled with breathless articles on tech sites and magazines about the Coming Artificial Intelligence (AI) Revolution, seemingly random references to Kurzweil on Twitter, and oddly edited radio interviews with conference attendees, highlighting fringe ideas at the expense of serious conversation. I spoke at the 2007 Singularity Summit, so this is all quite familiar to me.
If you're not quite sure what this Singularity thing is all about, you're not alone. The term "Singularity" comes from physics, where it describes a point of collapsed space-time, typically at the center of a black hole; the underlying claim is that our known laws of physics break down inside a singularity. This is the root of the metaphorical use of the term—after a Singularity event, everything we know will change in ways we can't now understand. In the early 1990s, mathematician and science fiction writer Vernor Vinge was the first to clearly articulate this usage of the term, in an essay that begins as follows:
Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.
By "superhuman intelligence," Vinge meant one of four different outcomes: We make machines that are intelligent, our computer networks "wake up" as an intelligent entity, human-computer interfaces become so intimate and powerful that the combination forms a superhuman intelligence, or we bioengineer ourselves with smarter brains. In this model, once we have an example of a greater-than-human intelligence, that in turn gives us the means to make even greater intelligence, and so forth. And because the so-much-greater-than-human intelligences will be able to figure out how to do spectacular things, the world that they will make (and remake) will very rapidly become something utterly unrecognizable to mere human minds.
The more common present-day usage of the term "Singularity" actually has a few different definitions, depending upon who you're talking to and how "serious" they are about the subject.
At the broadest end, the Singularity refers to a point in the future where technologically driven changes have hit so hard and so fast that people on the near side of the Singularity wouldn't be able to understand the lives of people living on the far side of it; the lives of the post-Singularity citizens simply wouldn't make sense to pre-Singularity folks. People who adopt this perspective tend to weave all sorts of future-y technological things into it, from radical longevity to personal robots to geoengineering, but always with the underlying point that these things are really disruptive to our lives. This is the usage that I am personally most comfortable with.
In its narrowest form, conversely, "Singularity" refers exclusively to the process described by Vinge, the creation of greater-than-human intelligence, which is then able to make itself even smarter, and so on in something that is occasionally called an "intelligence explosion"; whether or not the lives of post-Singularity citizens would make sense is an irrelevant question—the ultra-intelligent entities would be so much smarter and more powerful that human beings are little more than ants in comparison. People who adopt this perspective often get annoyed at the first group for talking about things that aren't related to intelligence, and tend to see the Singularity as something that could easily lead to the End of Everything. Many of the people associated with the Singularity Summit take this approach.
Somewhere in the middle are those who take the intelligence explosion concept, and use that as the engine for all sorts of ultra-tech fun: brain uploads, "computronium," endless digital lives lived in virtual worlds, and the like. These folks, as opposed to the former group, tend to see the Singularity as something generally desirable. They will, of course, acknowledge the potential for Bad Things to happen, but that's not the thrust of their arguments. The Singularity as presented in Ray Kurzweil's books falls into this category.
Despite the presence of the Singularity concept within various (largely online) sub-cultures, it remains on the edges of common discussion. That's hardly a surprise; the Singularity concept doesn't sit well with most people's visions of what tomorrow will hold (it's the classic "the future is weirder than I expect" scenario). Moreover, many of the loudest voices discussing the topic do so in a manner that's uncomfortably messianic. Assertions of certainty, claims of inevitability, and the dismissal of the notion that humankind has any choice in the matter—all for something that cannot be proven, and is built upon a nest of assumptions—do tend to drive away people who might otherwise find the idea intriguing.
And that's a problem, as the core of the Singularity argument is actually pretty interesting, and worth thinking about. Increasing functional intelligence—whether through smarter machines or smarter people—will almost certainly disrupt how we live in pretty substantial ways, for better and for worse. And there have been periods in our history where the combination of technological change and social change has resulted in quite radical shifts in how we live our lives—so radical that the expectations, norms, and behaviors of pre-transformation societies soon become out of place in the post-transformation world.
Two examples of this kind of radical shift from our history are urbanization—the development, thousands of years ago, of large-scale, permanent cities—and the development of the printing press. Both of these technologies (and yes, urbanization is a technological development) substantially increased the power of those who adopted them over those who abstained; and both resulted in a reshaping of politics, economics, and the social order. Of course, they were both considerably slower than present-day visions of a Singularity describe. That's a function of how fast innovations—and the power shifts they produce—propagate.
Were they "singularities"? Not by the strict "intelligence explosion" definition (although a case can be made for the printing press as being a slow-motion Singularity in that regard). They definitely fit better with the broader "things get really weird" definition. But they also underscore an issue that's worth understanding in a Singularity, however defined. They weren't exclusively technological; they were technosocial. How they came about, and how they evolved, depended as much on social, cultural, and political forces as on technological capacities.
Yet few of the discussions about the Singularity—pro or con—move beyond the technology. Can machines think? Will IA (intelligence augmentation) beat AI (artificial intelligence)? How many teraflops does it take to run a brain? There's too little discussion of how the social, cultural, and political choices we make would shape the onset, or even the possibility, of a Singularity.
I hope to change that. On October 3, I'll be giving a talk entitled "If I Can't Dance, I Don't Want to be Part of Your Singularity"* for the New York Futures Salon, at 7pm. The talk is open to the public—if you're in the area, please come on by.
Next week, I'll write a bit here about the ensuing conversation.
* The title refers to the line attributed to Emma Goldman about the socialist movements of her era (the early 20th century): "If I can't dance, I don't want to be part of your revolution."