But even so, could we be sure that our machines could always tell an enemy from a friend? Even when all our weapons are controlled by human hands and human brains, there is the problem of “friendly fire.” American weapons can accidentally kill American soldiers or civilians and have actually done so in the past. This is human error, but nevertheless it’s hard to take. But what if our robot weapons were to accidentally engage in “friendly fire” and wipe out American people, or even just American property? That would be far harder to take (especially if the enemy had worked out stratagems to confuse our robots and encourage them to hit our own side). No, I feel confident that attempts to use robots without safeguards won’t work and that, in the end, we will come round to the Three Laws.

Intelligences Together

In “Our Intelligent Tools” I mentioned the possibility that robots might become so intelligent that they would eventually replace us. I suggested, with a touch of cynicism, that in view of the human record, such a replacement might be a good thing. Since then, robots have become more and more important in industry, and, although they are as yet quite idiotic on the intelligence scale, they are advancing rapidly.

Perhaps, then, we ought to take another look at the matter of robots (or computers, which are the actual driving mechanisms of robots) replacing us. The outcome, of course, depends on how intelligent computers become and whether they will become so much more intelligent than we are that they will regard us as no more than pets, at best, or vermin, at worst. Framed that way, the question implies that intelligence is a simple thing that can be measured with something like a ruler or a thermometer (or an IQ test) and then expressed in a single number. If the average human being is measured as 100 on an overall intelligence scale, then as soon as the average computer passes 100, we will be in trouble.

Is that the way it works, though? Surely there must be considerable variety in such a subtle quality as intelligence; different species of it, so to speak. I presume it takes intelligence to write a coherent essay, to choose the right words, and to place them in the right order. I also presume it takes intelligence to study some intricate technical device, to see how it works and how it might be improved, or how it might be repaired if it had stopped working. As far as writing is concerned, my intelligence is extremely high; as far as tinkering is concerned, my intelligence is extremely low. Well, then, am I a genius or an imbecile? The answer is: neither. I’m just good at some things and not good at others, and that’s true of every one of us.

Suppose, then, we think about the origins of both human intelligence and computer intelligence. The human brain is built up essentially of proteins and nucleic acids; it is the product of over 3 billion years of hit-or-miss evolution; and the driving forces of its development have been adaptation and survival. Computers, on the other hand, are built up essentially of metal and electron surges; they are the product of some forty years of deliberate human design and development; and the driving force of their development has been the human desire to meet perceived human needs. If there are many aspects and varieties of intelligence among human beings themselves, isn’t it certain that human and computer intelligences are going to differ widely since they have originated and developed under such different circumstances, out of such different materials, and under the impulse of such different drives?

It would seem that computers, even comparatively simple and primitive specimens, are extraordinarily good in some ways. They possess capacious memories, have virtually instant and unfailing recall, and demonstrate the ability to carry through vast numbers of repetitive arithmetical operations without weariness or error. If that sort of thing is the measure of intelligence, then already computers are far more intelligent than we are. It is because they surpass us so greatly that we use them in a million different ways and know that our economy would fall apart if they all stopped working at once.

But such computer ability is not the only measure of intelligence. In fact, we consider that ability of so little value that no matter how quick a computer is and how impressive its solutions, we see it only as an overgrown slide rule with no true intelligence at all. What the human specialty seems to be, as far as intelligence is concerned, is the ability to see problems as a whole, to grasp solutions through intuition or insight, to see new combinations, and to make extraordinarily perceptive and creative guesses. Can’t we program a computer to do the same thing? Not likely, for we don’t know how we do it ourselves.

It would seem, then, that computers should get better and better in their variety of point-by-point, short-focus intelligence, and that human beings (thanks to increasing knowledge and understanding of the brain and the growing technology of genetic engineering) may improve in their own variety of whole-problem, long-focus intelligence. Each variety of intelligence has its advantages, and, in combination, human intelligence and computer intelligence, each filling in the gaps and compensating for the weaknesses of the other, can advance far more rapidly than either one could alone. It will not be a case of competing and replacing at all, but of intelligences together, working more efficiently than either alone within the laws of nature.

My Robots

I wrote my first robot story, “Robbie,” in May of 1939, when I was only nineteen years old.

What made it different from robot stories that had been written earlier was that I was determined not to make my robots symbols. They were not to be symbols of humanity’s overweening arrogance. They were not to be examples of human ambitions trespassing on the domain of the Almighty. They were not to be a new Tower of Babel requiring punishment.

Nor were the robots to be symbols of minority groups. They were not to be pathetic creatures that were unfairly persecuted so that I could make Aesopic statements about Jews, Blacks, or any other mistreated members of society. Naturally, I was bitterly opposed to such mistreatment, and I made that plain in numerous stories and essays, but not in my robot stories.

In that case, what did I make my robots? I made them engineering devices. I made them tools. I made them machines to serve human ends. And I made them objects with built-in safety features. In other words, I set it up so that a robot could not kill its creator, and, having outlawed that heavily overused plot, I was free to consider other, more rational consequences.

When I began writing my robot stories in 1939, I did not mention computerization in connection with them. The electronic computer had not yet been invented, and I did not foresee it. I did foresee, however, that a robot’s brain would have to be electronic in some fashion. “Electronic,” though, didn’t seem futuristic enough. The positron (a subatomic particle exactly like the electron but of opposite electric charge) had been discovered only seven years before I wrote my first robot story. It sounded very science fictional indeed, so I gave my robots “positronic brains” and imagined their thoughts to consist of flashing streams of positrons, coming into existence, then going out of existence almost immediately. The stories I wrote were therefore called “the positronic robot series,” but there was no greater significance to the use of positrons rather than electrons than what I have just described.

At first, I did not bother actually systematizing, or putting into words, just what the safeguards were that I imagined to be built into my robots. From the very start, though, since I wasn’t going to make it possible for a robot to kill its creator, I had to stress that robots could not harm human beings; that this was an ingrained part of the makeup of their positronic brains.