She often wondered what Oberon’s design said about her own psychology. The custom-design robots she had built before him—Donald, Caliban, Ariel, Prospero—had all been cutting-edge designs, highly advanced and, except for Donald, dangerously experimental. Not Oberon. Everything about his design was basic, conservative—even crude. Her other custom-built robots had required highly sophisticated construction and hand-tooled parts. Oberon represented little more than the assembly of components.

“I’ll just go in and freshen up,” she said to Oberon, and headed for the refresher, her mind still on why she had made Oberon the way she had. Once burned, twice shy? she wondered. Of course she had been burned twice already. It was a desire for rebellion against caution that had gotten her into trouble in the first place. And the second place. She found herself thinking back on it all as she stripped and headed into the refresher. The hot water jets of the needle-shower were just what she needed to unwind after the meeting with Prospero.

A few years before, Fredda Leving had been one of Inferno’s leading roboticists, with a well-earned reputation for taking chances, for searching out shortcuts, for impatience.

None of those character traits were exactly well-suited to the thoroughly calcified field of robotics research. There had not been a real breakthrough in robotics for hundreds of years, just an endless series of tiny, incremental advances. Robotics was an incredibly conservative field, caution and safety and care the watchwords at every turn.

Positronic brains had the standard Three Laws of Robotics burned into them, not once, but millions of times, each microcopy of the Laws standing guard to prevent any violation. The line of development went back in an unbroken chain, all the way to the first crude robotic brain built on Earth, untold thousands of years before.

Each generation of positronic brain had been based on the generation that went before—and each generation of design had sought to entwine the Three Laws more and more deeply into the positronic pathway that made up a robotic brain. Indeed, the closest the field had come to a breakthrough in living memory was a way to embed yet more microcopies of the Three Laws into the pathways of a positronic brain.

In principle, there was, of course, nothing wrong with safety. But there was such a thing as overdoing it. If a robotic brain checked a million times a second to see if a First Law violation was about to occur, that meant all other processing was interrupted a million times, slowing up productive work. Very large percentages of processing time, and very large percentages of the volume of the physical positronic brain, were given over to massively, insanely redundant iterations of the Three Laws.

But Fredda had wanted to know how a robot would behave with a modified law set—or with no law set at all. And that meant she was stuck. In order to create a positronic brain without the Three Laws, it would have been necessary to start completely from scratch, abandoning all those thousands of years of refinement and development, almost literally carving the brain paths by hand. Even if she had tried such a thing, the resulting robot brain would have been of such limited capacity and ability that the experiment results would have been meaningless. What point in testing the actions of a No Law robot who had such reduced intellect that it was barely capable of independent action?

There seemed no way around the dilemma. The positronic brain was robotics, and robotics was the positronic brain. The two had become so identified, one with the other, that it proved difficult, if not impossible, for most researchers to think of either one except as an aspect of the other.

But Gubber Anshaw was not like other researchers. He found a way to take the basic, underlying structure of a positronic brain, the underlying pathing that made it possible for a lump of sponge palladium to think and speak and control a body, and place that pathing, selectively, in a gravitonic structure.

A positronic brain was like a book in which all the pages had the Three Laws written on them, over and over, so that each page was half filled with the same redundant information, endlessly repeated, taking up space that thus could not be used to note down other, more useful data. A gravitonic brain was like a book of utterly blank pages, ready to be written on, with no needless clutter getting in the way of what was written. One could write down the Three Laws, if one wished, but the Three Laws were not jammed down the designer’s throat at every turn.

No other robotics lab had been willing to touch Anshaw’s work, but Fredda had jumped at the chance to take advantage of it.

Caliban was the first of her projects to go badly wrong. Fredda had long wanted to conduct a controlled, limited experiment on how a robot without the Three Laws would behave. But for long years, the very nature of robotics, and the positronic robot brain, had rendered the experiment impossible. Once the gravitonic brain was in her hands, however, she moved quickly toward development of a No Law robot—Caliban. He had been intended for use in a short-term laboratory experiment. The plan had been for him to live out his life in a sealed-off, controlled environment. Caliban had, unfortunately, escaped before the experiment had even begun, becoming entangled in a crisis that had nearly wrecked the government, and the reterraforming program on which all else depended.

The second disaster involved the New Law robots, such as Prospero. Fredda had actually built the first of the New Law robots before Caliban. It was only because the world had become aware of Caliban first that people generally regarded him as preceding the New Laws.

But both the New Laws and Caliban were products of Fredda’s concerns that robots built in accordance with the original Three Laws were wrecking human initiative and tremendously wasteful of robot labor. The more advanced robots became, the more completely they protected humans from danger, and the fewer things humans were allowed to do for themselves. At the same time, humans made the problem worse by putting the superabundance of robot labor to work at the most meaningless and trivial of tasks. It was common to have one robot on hand to cook each meal of the day, or to have one robot in charge of selecting the wine for dinner, while another had as its sole duty the drawing of the cork. Even if a man had only one aircar, he was likely to have five or six robot pilots, each painted a different color, to ensure the pilot did not clash with the owner’s outfit.

Both humans and robots had tended to consider robots to be of very little value, with the result that robots were constantly being destroyed for the most pointless of reasons, protecting humans from dangers that could have easily been avoided.

Humans were in the process of being reduced to drones. They were unproductive and in large part utterly inactive. Robots did more and more of the work, and were regarded with less and less respect. Work itself was held in lower and lower esteem. Work was what robots did, and robots were lesser beings.