The monitor flickered with sharp-edged waveforms, hundreds of them joining to fill the screen until it was a jumble of multicolored lines. “Looks like we caught him in time,” Derec said. “There seems to be quite a bit of mental activity.” He reached up and switched in a filter, and the jumble diminished to a manageable half-dozen or so waveforms. They weren’t actual voltage traces, but rather representations of activity in the various levels of the brain, useful for visualizing certain types of thoughts.
Janet frowned. “Are those supposed to be the Three Laws?”
“That’s right.”
The pattern was still recognizable as the one built into every positronic brain at the time of manufacture, but just barely. Each of the laws showed in a separate hue of green, but overlaying them all were two companion waves, a deep violet one that split and rejoined much as the Three Laws did, and a lighter blue one weaving in and out around the laws and linking up with other signals from all over the screen. The effect looked as if the violet and blue waves were purposefully entangling the laws, preventing them from altering their potential beyond carefully delineated levels. Janet suspected that was just what they were doing. Visual analogy didn’t always work in describing a robot’s inner workings, but in this case it looked pretty straightforward.
“I’d say that explains a lot,” she said.
Derec flipped to another band, following the two waves as they wove from the Three Laws through the self-awareness section and into the duty queue. “Looks like he’s built a pretty heavy web of rationalization around just about all the pre-defined areas of thought,” he said. “Normal diagnostic procedure would be to wake him up and ask him what all that means, but I don’t think we want to do that just yet. Adam, you know how he thinks; can you make sense of it?”
The one remaining learning machine stepped over to Derec’s side. Adam? Had he known the significance of that name when he chose it, or had it been given to him? Janet supposed the other one would be Eve, then. And this one, the renegade, was Lucius. Why hadn’t he gone for the obvious and called himself Lucifer? She itched to ask them. She had to talk with them soon.
In answer to Derec’s question, Adam said, “The violet potential schematic corresponds to the Laws of Humanics. The blue one is the Zeroth Law of Robotics.”
“Beg your pardon?” Janet asked. “Laws of Humanics? Zeroth Law? What are you talking about?”
Her learning machine looked over at her and said, “We have attempted to develop a set of laws governing human behavior, laws similar to the ones that govern our own. They are, of course, purely descriptive rather than compulsory, but we felt that understanding them might give us an understanding of human behavior which we otherwise lacked. As for the Zeroth Law, we felt that the Three Laws were insufficient in defining our obligations toward the human race in general, so we attempted to define that obligation ourselves.”
Janet was careful not to express the joy she felt, for fear of influencing the robot somehow, but inside she was ecstatic. This was perfect! Her experiment was working out after all. Her learning machines had begun to generalize from their experiences. “And what did you come up with?” she asked.
“Bear in mind that these laws describe potential conditions within a positronic brain, so words are inadequate to describe them perfectly; however, they can be expressed approximately as follows. The First Law of Humanics: All beings will do that which pleases them most. The Second Law of Humanics: A sentient being may not harm a friend, or through inaction allow a friend to come to harm. The Third Law of Humanics: A sentient being will do what a friend asks, but a friend may not ask unreasonable things.” He paused, perhaps giving Janet time to assimilate the new laws’ meanings.
Not bad. Not bad at all. Like he’d said, they certainly weren’t compulsory as far as most humans went, but Janet doubted she could have done any better. “And what is your Zeroth Law?” she asked.
“That is much more difficult to state in words, but a close approximation would be that any action should serve the greatest number of humans possible.” Adam nodded toward Lucius. “Lucius has taken the Law a step farther than Eve or I, and we believe it was that step which led him to do what he did to Dr. Avery. He believes that the value of the humans in question should also be considered.”
Eve. She’d guessed right. “And you don’t?”
Adam raised his arms with the palms of his hands up. It took Janet a moment to recognize it as a shrug, since she’d never seen a robot use the gesture before. Adam said, “I am…uncomfortable with the subjectivity of the process. I had hoped to find a more definite operating principle.”
“But Lucius is satisfied with it.”
“That seems to be the case.”
“Why do you suppose he is and you aren’t?”
“Because,” Adam said, again hesitating. “Because he believes himself to be human.”
If the robot were hoping to shock her with that revelation, he was going to be disappointed. Janet had expected something like this would happen from the start; indeed, in a way it was the whole point of the experiment. She waited patiently for the question she knew was coming.
Adam didn’t disappoint her. He looked straight into her eyes with his own metallic ones and said, “Each of us has struggled with this question since we awakened, but none of us have been able to answer it to our mutual satisfaction. You created us, though. Please tell us: are we human?”
Janet used the same palms-up gesture Adam had used. “I don’t know. You tell me.”
Adam knew the sudden surge of conflicting potentials for what it was: frustration. He had experienced enough of it in his short life to recognize it when it happened. This time the frustration came from believing his search for truth was over and suddenly finding that it wasn’t.
He felt a brief Second Law urge to answer her question with a simple declarative statement, but he shunted that aside easily. She obviously wanted more than that, and so did he. She wanted to see the reasoning behind his position; he wanted to see if that reasoning would withstand her scrutiny.
He opened a comlink channel to Eve and explained the situation to her. Together they tried to establish a link with Lucius, but evidently the five volts Derec was supplying him hadn’t been enough to wake him. They would have to do without his input. Adam wasn’t all that disappointed; Lucius’s reasoning had led him to violate the First Law.
Janet was waiting for Adam’s response. Carefully, consulting with Eve at every turn, he began to outline the logic that had led them to their conclusion that any intelligent organic being had to be considered human. He began with his own awakening on Tau Puppis IV and proceeded through the incident with the Ceremyons, through Lucius’s experiments in creating human beings in Robot City, through the robots’ return to Tau Puppis and their dealings with the Kin, to their final encounter with Aranimas. He explained how each encounter with an alien being reinforced the robots’ belief that body shape made no difference in the essential humanity of the mind inside it, and how those same contacts had even made differences in intelligence and technological advancement seem of questionable importance.
Throughout his presentation, Adam tried to judge Janet’s reaction to it by her facial expression, but she was giving nothing away. She merely nodded on occasion and said, “I’m with you so far.”
At last he reached the concept of Vitalism, the belief that organic beings were somehow inherently superior to electromechanical ones, and how the robots could find no proof of its validity. He ended with, “That lack of proof led Lucius to conclude that Vitalism is false, and that robots could therefore be considered human. Neither Eve nor I could convince ourselves of this, nor could Mandelbrot, for that matter, and now that Lucius’s belief has led him into injuring a human, we feel even less comfortable with it. We don’t know what to believe.”