Technological Singularity

Vernor Vinge

Magazine: Whole Earth Review

Issue: December 10, 1993

Vernor Vinge’s vision of a technological “singularity” in humanity’s near future has haunted me since I first read of it in his science-fiction novel, Marooned in Realtime (1986). I’m persuaded that the acceleration of technological acceleration is even now distorting human institutions and expectations, whether or not we are approaching a metaphorical “event horizon” beyond which everything becomes unrecognizable.

When I invited Vinge to write something about his current views on the singularity for the recent issue of Whole Earth Review that I guest-edited, he replied that he had just presented a paper on the subject for the VISION-21 Symposium, sponsored by the NASA Lewis Research Center and the Ohio Aerospace Institute. In due course he revised the piece and sent it along. I can think of no other technical paper that has so many references to science-fiction literature, as well it should.

Vinge is a mathematician at San Diego State University, specializing in distributed computing and computer architecture. One of his short stories, “True Names” (1981), is often mentioned along with John Brunner’s Shockwave Rider and William Gibson’s Neuromancer as an inspiration to the current generation of online computer pioneers. Vinge’s two “Realtime” novels (combined in Across Realtime — 1991) have been nominated for Hugo Awards, science fiction’s top prize. His new novel, A Fire Upon the Deep, won the 1993 Hugo; it’s reviewed on p. 95.

—Stewart Brand

–––––––––––––––––––––––––

TECHNOLOGICAL SINGULARITY

(c) 1993 by Vernor Vinge (This article may be reproduced for noncommercial purposes if it is copied in its entirety, including this notice.)

A slightly different version of this article was presented at the VISION-21 Symposium sponsored by NASA Lewis Research Center and the Ohio Aerospace Institute, March 30-31, 1993. —Vernor Vinge

1. What Is The Singularity?

The acceleration of technological progress has been the central feature of this century. We are on the edge of change comparable to the rise of human life on Earth. The precise cause of this change is the imminent creation by technology of entities with greater-than-human intelligence. Science may achieve this breakthrough by several means (and this is another reason for having confidence that the event will occur):

Computers that are “awake” and superhumanly intelligent may be developed. (To date, there has been much controversy as to whether we can create human equivalence in a machine. But if the answer is “yes,” then there is little doubt that more intelligent beings can be constructed shortly thereafter.)

Large computer networks and their associated users may “wake up” as superhumanly intelligent entities.

Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent.

Biological science may provide means to improve natural human intellect.

The first three possibilities depend on improvements in computer hardware. Progress in hardware has followed an amazingly steady curve in the last few decades. Based on this trend, I believe that the creation of greater-than-human intelligence will occur during the next thirty years. (Charles Platt has pointed out that AI enthusiasts have been making claims like this for thirty years. Just so I’m not guilty of a relative-time ambiguity, let me be more specific: I’ll be surprised if this event occurs before 2005 or after 2030.)
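The “amazingly steady curve” Vinge leans on here is the familiar doubling trend in hardware. As a rough illustration of how such a trend compounds over his thirty-year window (the 18-month doubling period and the 1993 baseline are assumptions for the sketch, not figures from the essay), a few lines of Python suffice:

```python
# Rough extrapolation of exponential hardware growth.
# Assumptions (not from the essay): capability doubles every 18 months,
# normalized to 1.0 at the 1993 baseline.

def capability(year, base_year=1993, doubling_years=1.5):
    """Relative hardware capability under a fixed doubling time."""
    return 2 ** ((year - base_year) / doubling_years)

# Vinge's bracketing dates for the event:
for year in (2005, 2030):
    print(year, f"about {capability(year):,.0f}x the 1993 baseline")
```

Under these assumed parameters, capability grows by a factor of 256 by 2005 and by tens of millions by 2030 — the point being only that a steady doubling trend, extended three decades, yields a machine substrate unrecognizably more powerful than 1993 hardware.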

What are the consequences of this event? When greater-than-human intelligence drives progress, that progress will be much more rapid. In fact, there seems no reason why progress itself would not involve the creation of still more intelligent entities — on a still-shorter time scale. The best analogy I see is to the evolutionary past: Animals can adapt to problems and make inventions, but often no faster than natural selection can do its work — the world acts as its own simulator in the case of natural selection. We humans have the ability to internalize the world and conduct what-if’s in our heads; we can solve many problems thousands of times faster than natural selection could. Now, by creating the means to execute those simulations at much higher speeds, we are entering a regime as radically different from our human past as we humans are from the lower animals.

This change will be a throwing-away of all the human rules, perhaps in the blink of an eye — an exponential runaway beyond any hope of control. Developments once thought possible only in “a million years” (if ever) will likely happen in the next century.

It’s fair to call this event a singularity (“the Singularity” for the purposes of this piece). It is a point where our old models must be discarded and a new reality rules, a point that will loom vaster and vaster over human affairs until the notion becomes a commonplace. Yet when it finally happens, it may still be a great surprise and a greater unknown. In the 1950s very few saw it: Stan Ulam [1] paraphrased John von Neumann as saying:

One conversation centered on the ever-accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.

Von Neumann even uses the term singularity, though it appears he is thinking of normal progress, not the creation of superhuman intellect. (For me, the superhumanity is the essence of the Singularity. Without that we would get a glut of technical riches, never properly absorbed.)

The 1960s saw recognition of some of the implications of superhuman intelligence. I. J. Good wrote:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. … It is more probable than not that, within the twentieth century, an ultraintelligent machine will be built and that it will be the last invention that man need make.

Good has captured the essence of the runaway, but he does not pursue its most disturbing consequences. Any intelligent machine of the sort he describes would not be humankind’s “tool” — any more than humans are the tools of rabbits, robins, or chimpanzees.

Through the sixties and seventies and eighties, recognition of the cataclysm spread. Perhaps it was the science-fiction writers who felt the first concrete impact. After all, the “hard” science-fiction writers are the ones who try to write specific stories about all that technology may do for us. More and more, these writers felt an opaque wall across the future. Once, they could put such fantasies millions of years in the future. Now they saw that their most diligent extrapolations resulted in the unknowable … soon. Once, galactic empires might have seemed a Posthuman domain. Now, sadly, even interplanetary ones are.

What about the coming decades, as we slide toward the edge? How will the approach of the Singularity spread across the human world view? For a while yet, the general critics of machine sapience will have good press. After all, until we have hardware as powerful as a human brain it is probably foolish to think we’ll be able to create human-equivalent (or greater) intelligence. (There is the farfetched possibility that we could make a human equivalent out of less powerful hardware — if we were willing to give up speed, if we were willing to settle for an artificial being that was literally slow. But it’s much more likely that devising the software will be a tricky process, involving lots of false starts and experimentation. If so, then the arrival of self-aware machines will not happen until after the development of hardware that is substantially more powerful than humans’ natural equipment.)