There was a time, when the earth was to all appearance utterly destitute both of animal and vegetable life, and when according to the opinion of our best philosophers it was simply a hot round ball with a crust gradually cooling. Now if a human being had existed while the earth was in this state and had been allowed to see it as though it were some other world with which he had no concern, and if at the same time he were entirely ignorant of all physical science, would he not have pronounced it impossible that creatures possessed of anything like consciousness should be evolved from the seeming cinder which he was beholding? Would he not have denied that it contained any potentiality of consciousness? Yet in the course of time consciousness came. Is it not possible then that there may be even yet new channels dug out for consciousness, though we can detect no signs of them at present?
When we reflect upon the manifold phases of life and consciousness which have been evolved already, it would be rash to say that no others can be developed, and that animal life is the end of all things. There was a time when fire was the end of all things: another when rocks and water were so.
There is no security against the ultimate development of mechanical consciousness, in the fact of machines possessing little consciousness now. A mollusk has not much consciousness. Reflect upon the extraordinary advance which machines have made during the last few hundred years, and note how slowly the animal and vegetable kingdoms are advancing. The more highly organized machines are creatures not so much of yesterday, as of the last five minutes, so to speak, in comparison with past time. Assume for the sake of argument that conscious beings have existed for some twenty million years: see what strides machines have made in the last thousand! May not the world last twenty million years longer? If so, what will they not in the end become?
My core thesis, which I call the law of accelerating returns (LOAR), is that fundamental measures of information technology follow predictable and exponential trajectories, belying the conventional wisdom that “you can’t predict the future.” There are still many things—which project, company, or technical standard will prevail in the marketplace, when peace will come to the Middle East—that remain unknowable, but the underlying price/performance and capacity of information have nonetheless proven to be remarkably predictable. Surprisingly, these trends are unperturbed by conditions such as war or peace and prosperity or recession.
A primary reason that evolution created brains was to predict the future. As one of our ancestors walked through the savannas thousands of years ago, she might have noticed that an animal was progressing toward a route that she was taking. She would predict that if she stayed on course, their paths would intersect. Based on this, she decided to head in another direction, and her foresight proved valuable to survival.
But such built-in predictors of the future are linear, not exponential, a quality that stems from the linear organization of the neocortex. Recall that the neocortex is constantly making predictions—what letter and word we will see next, whom we expect to see as we round the corner, and so on. The neocortex is organized with linear sequences of steps in each pattern, which means that exponential thinking does not come naturally to us. The cerebellum also uses linear predictions. When it helps us to catch a fly ball, it is making a linear prediction about where the ball will appear in our visual field and where our gloved hand should be to catch it.
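To make “linear prediction” concrete, here is a minimal sketch of linear extrapolation: given two recent observations of a position, it assumes constant velocity and projects one step ahead. This is only an illustration of the idea, not a model of the cerebellum; the function name, time step, and numbers are invented for the example.

```python
def predict_linear(p_prev, p_now, dt=1.0):
    """Linear extrapolation: assume the change over the last interval
    repeats over the next one (constant velocity, no curvature)."""
    velocity = (p_now - p_prev) / dt
    return p_now + velocity * dt

# A ball seen at x = 10.0, then at x = 12.5 one time step later,
# is predicted to be at x = 15.0 on the following step.
print(predict_linear(10.0, 12.5))  # 15.0
```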
As I have pointed out, there is a dramatic difference between linear and exponential progressions (forty linear steps take you to forty; forty exponential steps, doubling each time, take you to a trillion), which accounts for why my predictions stemming from the law of accelerating returns seem surprising to many observers at first. We have to train ourselves to think exponentially. When it comes to information technologies, it is the right way to think.
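The arithmetic behind that parenthetical is easy to verify. The short sketch below is a toy illustration only, assuming a doubling at every exponential step:

```python
steps = 40
linear = steps                 # adding one at each of forty steps
exponential = 2 ** steps       # doubling at each of forty steps
print(linear)                  # 40
print(exponential)             # 1,099,511,627,776 -- roughly a trillion
```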
The quintessential example of the law of accelerating returns is the perfectly smooth, doubly exponential growth of the price/performance of computation, which has held steady for 110 years through two world wars, the Great Depression, the Cold War, the collapse of the Soviet Union, the reemergence of China, the recent financial crisis, and all of the other notable events of the late nineteenth, twentieth, and early twenty-first centuries. Some people refer to this phenomenon as “Moore’s law,” but that is a misconception. Moore’s law—which states that you can place twice as many components on an integrated circuit every two years, and they run faster because they are smaller—is just one paradigm among many. It was in fact the fifth, not the first, paradigm to bring exponential growth to the price/performance of computing.
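As a rough illustration of what the two-year doubling stated above implies (a back-of-the-envelope sketch, not a claim about any particular chip or year), component counts compound as a power of two in the elapsed time divided by the doubling period:

```python
def growth_factor(years, doubling_period=2.0):
    """Growth factor implied by a fixed doubling period, e.g. Moore's
    law as stated above: components double every two years."""
    return 2 ** (years / doubling_period)

for years in (10, 20, 40):
    print(years, round(growth_factor(years)))
# 10 -> 32, 20 -> 1024, 40 -> 1048576
```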
The exponential rise of computation started with the 1890 U.S. census (the first to be automated) using the first paradigm of electromechanical calculation, decades before Gordon Moore was even born. In The Singularity Is Near I provide this graph through 2002, and here I update it through 2009 (see the graph on page 257 titled “Exponential Growth of Computing for 110 Years”). The smoothly predictable trajectory has continued, even through the recent economic downturn.
Computation is the most important example of the law of accelerating returns, because of the amount of data we have for it, the ubiquity of computation, and its key role in ultimately revolutionizing everything we care about. But it is far from the only example. Once a technology becomes an information technology, it becomes subject to the LOAR.
Biomedicine is becoming the most significant recent area of technology and industry to be transformed in this way. Progress in medicine has historically been based on accidental discoveries, so progress during the earlier era was linear, not exponential. This has nevertheless been beneficial: life expectancy has grown from twenty-three years as of a thousand years ago, to thirty-seven years as of two hundred years ago, to close to eighty years today. With the gathering of the software of life—the genome—medicine and human biology have become an information technology. The Human Genome Project itself was perfectly exponential, with the amount of genetic data doubling and the cost per base pair coming down by half each year since the project was initiated in 1990.3 (All the graphs in this chapter have been updated since The Singularity Is Near was published.)
[Graph: The cost of sequencing a human-sized genome.1]
[Graph: The amount of genetic data sequenced in the world each year.2]
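Taking the paragraph above at face value—yearly doubling of sequenced data and yearly halving of cost per base pair, starting in 1990—the compounding is easy to sketch. The starting values of 1.0 below are placeholder units, not figures from the text:

```python
def compound(start_value, years, annual_factor):
    """Value after applying a fixed annual factor for a number of years."""
    return start_value * (annual_factor ** years)

years = 2009 - 1990                          # 19 years, matching the updated graphs
data_growth = compound(1.0, years, 2.0)      # yearly doubling of sequenced data
cost_fall = compound(1.0, years, 0.5)        # yearly halving of cost per base pair
print(round(data_growth))                    # 524288-fold more data than in 1990
print(f"{cost_fall:.7f}")                    # cost per base pair at ~1/524288 of its 1990 level
```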
We now have the ability to design biomedical interventions on computers and to test them on biological simulators, the scale and precision of which are also doubling every year. We can also update our own obsolete software: RNA interference can turn genes off, and new forms of gene therapy can add new genes, not just to a newborn but to a mature individual. The advance of genetic technologies also affects the brain reverse-engineering project, in that one important aspect of it is understanding how genes control brain functions such as creating new connections to reflect recently added cortical knowledge. There are many other manifestations of this integration of biology and information technology, as we move beyond genome sequencing to genome synthesizing.