But what if the robot looks, superficially, exactly like a human being (like my robot, Daneel Olivaw)? How can you tell that he’s a robot? Well, in my later robot novels, you can’t, really. Daneel Olivaw is a human being in all respects except that he’s a lot more intelligent than most human beings, a lot more ethical, a lot kinder and more decent, a lot more human. That makes for a good story, too, but it doesn’t help identify a robot in any practical sense. You can’t follow a robot around to see if it is better than a human being, for you then have to ask yourself: is he (she) a robot or just an unusually good human being?
There’s this:
A robot is bound by the Three Laws of Robotics, and a human being is not. That means, for instance, that if you are a human being and you punch someone you think may be a robot and he punches you back, then he is not a robot. If you yourself are a robot, then if you punch him and he punches you back, he may nevertheless be a robot, since he may know that you are a robot, and the First Law does not prevent him from hitting you. (That was a key point in my early story, “Evidence.”) In that case, though, you must ask a human being to punch the suspected robot, and if he punches back he is no robot.
However, it doesn’t work the other way around. If you are a human being and you hit a suspected robot, and he doesn’t hit you back, that doesn’t mean he is a robot. He may be a human being, but a coward. He may be a human being, but an idealist who believes in turning the other cheek.
In fact, if you are a human being and you punch a suspected robot and he punches you back, he may nevertheless be a robot.
After all, the First Law says, “A robot may not harm a human being or, through inaction, allow a human being to come to harm.” That, however, begs the question, for it assumes that a robot knows what a human being is in the first place.
Suppose a robot is manufactured to be no better than a human being. Human beings often suppose other people are inferior, and not fully human, if they simply don’t speak the same language, or speak it with an odd accent. (That’s the whole point of George Bernard Shaw’s Pygmalion.) In that case, it should be simple to build a robot within whom the definition of a human being includes the speaking of a specific language with a specific accent. Any failure in that respect makes a person the robot must deal with not a human being, and the robot can harm or even kill him without breaking the First Law.
In fact, I have a robot in my book Robots and Empire for whom a human being is defined as someone who speaks with a Solarian accent, and my hero is in danger of death for that very reason.
So you see it is not easy to differentiate between a robot and a human being.
We can make the matter even more difficult if we suppose a world of robots that have never seen human beings. (This would be like early, unsophisticated human beings who had never met anyone outside their own tribe.) They might still have the First Law and might still know that they must not harm a human being-but what is this human being they must not harm?
They might well think that a human being is superior to a robot in some ways, since that would be one reason why he must not be harmed. You ought not to offer violence to someone worthier than yourself.
On the other hand, if someone were superior to you, wouldn’t it be sensible to suppose that you couldn’t harm him? If you could, wouldn’t that make him inferior to you? The fallacy there ought to be obvious. A robot is certainly superior to an unthinking rock, yet a falling rock might easily harm or even destroy a robot. Therefore the inferior can harm the superior, but in a well-run Universe it should not do so.
In that case, a robot beginning only with the Laws of Robotics might well conclude that human beings were superior to robots.
But then, suppose that in this world of robots, one robot is superior to all the rest. Is it possible, in that case, that this superior robot, who has never seen a human being, might conclude that he himself is a human being?
If he can persuade the other robots that this is so, then the Laws of Robotics will govern their behavior toward him, and he may well establish a despotism over them. But will it differ from a human despotism in any way? Will this robot-human still be governed and limited by the Three Laws in certain respects, or will it be totally free of them?
In that case, if it has the appearance and mentality and behavior of a human being, and if it lacks the Three Laws, in what way is it not a human being? Has it not become a human being in actuality?
And what happens if, then, real human beings appear on the scene? Do the Three Laws suddenly begin to function again in the robot-human, or does he persist in considering himself human? In my very first published robot story, “Reason,” come to think of it, I described a robot that considered himself to be superior to human beings and could not be argued out of it.
So what with one thing or another, the problem of defining a human being is enormously complex, and while in my various stories I’ve dealt with different aspects of it, I am glad to leave the further consideration of that problem to Robert Thurston in this third book of the Robots and Aliens series.
Chapter 1. Robot City Dreams
Derec knew he was dreaming. The street he now ambled down wasn’t real. There had never been a street anywhere in Robot City like this distorted thoroughfare. Still, too much was familiar about it, and that really scared him.
The Compass Tower, now far off in the distance, had changed, too. There seemed to be lumps all over its surfaces, but that was impossible. In a city where buildings could appear and disappear overnight, the Compass Tower was the only permanent, unchangeable structure.
It was possible this strange street was newly created, but he doubted that. It was a dream-street, plain and simple, and this had to be a dream. Anyway, where were the robots? Nobody could travel this far along a Robot City street without encountering at least a utility robot scurrying along, on its way to some regular task; or a courier robot, its claws clutching tools; or a witness robot, checking the movements of the humans. During a stroll like this, Derec should have encountered a robot every few steps.
No, it was absolutely certain this was a dream. What he was actually doing was sleeping in his ship somewhere in space between the blackbody planet and Robot City. He had just come off duty after dealing with the Silversides for hours, a task that would tire a saint.
At one time, just after his father had injected chemfets into his bloodstream, he had regularly dreamed of Robot City, but it turned out that his harrowing nightmares had all been induced by a monitor that his father had implanted in his brain. The monitor had been trying to establish contact so that he would become aware of the nature of the chemfets, which were tiny circuit boards that grew in much the same manner as the city itself had. Replicating in his bloodstream and programmed by his father, they were a tiny robot city in his body, one that gave him psycho-electronic control over the city’s core computer and therefore all its robots. Once he knew this and the chemfets’ replication process had stabilized, he had had no more nightmares of a distorted Robot City.
Until now.
Since he was so aware that he was dreaming, perhaps this was what Ariel had explained to him as a “lucid dream.” In the lucid dream state, she had said, the dreamer could control the events of the dream. He wanted to control this dream, but at the moment he couldn’t think of anything in particular to do.