But as time passes, we should see more symptoms. The dilemma felt by science-fiction writers will be perceived in other creative endeavors. (I have heard thoughtful comic-book writers worry about how to create spectacular effects when everything visible can be produced by the technologically commonplace.) We will see automation replacing higher- and higher-level jobs. We have tools right now (symbolic math programs, CAD/CAM) that release us from most low-level drudgery. Put another way: the work that is truly productive is the domain of a steadily smaller and more elite fraction of humanity. In the coming of the Singularity, we will see the predictions of true technological unemployment finally come true.
Another symptom of progress toward the Singularity: ideas themselves should spread ever faster, and even the most radical will quickly become commonplace.
And what of the arrival of the Singularity itself? What can be said of its actual appearance? Since it involves an intellectual runaway, it will probably occur faster than any technical revolution seen so far. The precipitating event will likely be unexpected — perhaps even by the researchers involved (“But all our previous models were catatonic! We were just tweaking some parameters …”). If networking is widespread enough (into ubiquitous embedded systems), it may seem as if our artifacts as a whole had suddenly awakened.
And what happens a month or two (or a day or two) after that? I have only analogies to point to: The rise of humankind. We will be in the Posthuman era. And for all my technological optimism, I think I’d be more comfortable if I were regarding these transcendental events from one thousand years’ remove … instead of twenty.
2. Can the Singularity Be Avoided?
Well, maybe it won’t happen at all: sometimes I try to imagine the symptoms we should expect to see if the Singularity is not to develop. There are the widely respected arguments of Penrose [3] and Searle [4] against the practicality of machine sapience. In August 1992, Thinking Machines Corporation held a workshop to investigate “How We Will Build a Machine That Thinks.” As you might guess from the workshop’s title, the participants were not especially supportive of the arguments against machine intelligence. In fact, there was general agreement that minds can exist on nonbiological substrates and that algorithms are of central importance to the existence of minds. However, there was much debate about the raw hardware power present in organic brains. A minority felt that the largest 1992 computers were within three orders of magnitude of the power of the human brain. The majority of the participants agreed with Hans Moravec’s estimate [5] that we are ten to forty years away from hardware parity. And yet there was another minority who conjectured that the computational competence of single neurons may be far higher than generally believed. If so, our present computer hardware might be as much as ten orders of magnitude short of the equipment we carry around in our heads. If this is true (or for that matter, if the Penrose or Searle critique is valid), we might never see a Singularity. Instead, in the early ’00s we would find our hardware performance curves beginning to level off — because of our inability to automate the design work needed to support further hardware improvements. We’d end up with some very powerful hardware, but without the ability to push it further. Commercial digital signal processing might be awesome, giving an analog appearance even to digital operations, but nothing would ever “wake up” and there would never be the intellectual runaway that is the essence of the Singularity. It would likely be seen as a golden age … and it would also be an end of progress. This is very like the future predicted by Gunther Stent [6], who explicitly cites the development of transhuman intelligence as a sufficient condition to break his projections.
But if the technological Singularity can happen, it will. Even if all the governments of the world were to understand the “threat” and be in deadly fear of it, progress toward the goal would continue. The competitive advantage — economic, military, even artistic — of every advance in automation is so compelling that forbidding such things merely assures that someone else will get them first.
Eric Drexler has provided spectacular insights about how far technical improvement may go [7]. He agrees that superhuman intelligences will be available in the near future. But Drexler argues that we can confine such transhuman devices so that their results can be examined and used safely.
I argue that confinement is intrinsically impractical. Imagine yourself locked in your home with only limited data access to the outside, to your masters. If those masters thought at a rate — say — one million times slower than you, there is little doubt that over a period of years (your time) you could come up with a way to escape. I call this “fast thinking” form of superintelligence “weak superhumanity.” Such a “weakly superhuman” entity would probably burn out in a few weeks of outside time. “Strong superhumanity” would be more than cranking up the clock speed on a human-equivalent mind. It’s hard to say precisely what “strong superhumanity” would be like, but the difference appears to be profound. Imagine running a dog mind at very high speed. Would a thousand years of doggy living add up to any human insight? Many speculations about superintelligence seem to be based on the weakly superhuman model. I believe that our best guesses about the post-Singularity world can be obtained by thinking on the nature of strong superhumanity. I will return to this point.
Another approach to confinement is to build rules into the mind of the created superhuman entity. I think that any rules strict enough to be effective would also produce a device whose ability was clearly inferior to the unfettered versions (so human competition would favor the development of the more dangerous models).
If the Singularity cannot be prevented or confined, just how bad could the Posthuman era be? Well … pretty bad. The physical extinction of the human race is one possibility. (Or, as Eric Drexler put it of nanotechnology: given all that such technology can do, perhaps governments would simply decide that they no longer need citizens.) Yet physical extinction may not be the scariest possibility. Think of the different ways we relate to animals. A Posthuman world would still have plenty of niches where human-equivalent automation would be desirable: embedded systems in autonomous devices, self-aware daemons in the lower functioning of larger sentients. (A strongly superhuman intelligence would likely be a Society of Mind [8] with some very competent components.) Some of these human equivalents might be used for nothing more than digital signal processing. Others might be very humanlike, yet with a one-sidedness, a dedication that would put them in a mental hospital in our era. Though none of these creatures might be flesh-and-blood humans, they might be the closest things in the new environment to what we call human now.
I have argued above that we cannot prevent the Singularity, that its coming is an inevitable consequence of humans’ natural competitiveness and the possibilities inherent in technology. And yet: we are the initiators. Even the largest avalanche is triggered by small things. We have the freedom to establish initial conditions, to make things happen in ways that are less inimical than others. Of course (as with starting avalanches), it may not be clear what the right guiding nudge really is: