"I can run a second version of you, entirely out of harm's way. I can give you a kind of insurance -- against an anti-Copy backlash . . . or a meteor strike . . . or whatever else might go wrong."

Thomas was momentarily speechless. The subject wasn't entirely taboo, but he couldn't recall anyone raising it quite so bluntly before. He recovered swiftly. "I have no wish to run a second version, thank you. And . . . what do you mean, 'out of harm's way'? Where's your invulnerable computer going to be? In orbit? Up where it would only take a pebble-sized meteor to destroy it, instead of a boulder?"

"No, not in orbit. And if you don't want a second version, that's fine. You could simply move."

"Move where? Underground? To the bottom of the ocean? You don't even know where this office is being implemented, do you? What makes you think you can offer a superior site -- for such a ridiculous price -- when you don't have the faintest idea how secure I am already?" Thomas was growing disappointed, and uncharacteristically irritable. "Stop making these inflated claims, and get to the point. What are you selling?"

Durham shook his head apologetically. "I can't tell you that. Not yet. If I tried to explain it, out of the blue, it would make no sense. You have to do something first. Something very simple."

"Yes? And what's that?"

"You have to conduct a small experiment."

Thomas scowled. "What kind of experiment? Why?"

And Durham -- the software puppet, the lifeless shell animated by a being from another plane -- looked him in the eye and said, "You have to let me show you exactly what you are."

3

(Rip, tie, cut toy man)

JUNE 2045

Paul -- or the flesh-and-blood man whose memories he'd inherited -- had traced the history of Copies back to the turn of the century, when researchers had begun to fine-tune the generic computer models used for surgical training and pharmacology, transforming them into customized versions able to predict the needs and problems of individual patients. Drug therapies were tried out in advance on models which incorporated specific genetic and biochemical traits, allowing doses to be optimized and any idiosyncratic side-effects anticipated and avoided. Elaborate operations were rehearsed and perfected in Virtual Reality, on software bodies with anatomical details -- down to the finest capillaries -- based on the flesh-and-blood patient's tomographic scans.

These early models included a crude approximation of the brain, perfectly adequate for heart surgery or immunotherapy -- and even useful to a degree when dealing with gross cerebral injuries and tumours -- but worthless for exploring more subtle neurological problems.

Imaging technology steadily improved, though -- and by 2020, it had reached the point where individual neurons could be mapped, and the properties of individual synapses measured, non-invasively. With a combination of scanners, every psychologically relevant detail of the brain could be read from the living organ -- and duplicated on a sufficiently powerful computer.

At first, only isolated neural pathways were modeled: portions of the visual cortex of interest to designers of machine vision, or sections of the limbic system whose role had been in dispute. These fragmentary neural models yielded valuable results, but a functionally complete representation of the whole organ -- embedded in a whole body -- would have allowed the most delicate feats of neurosurgery and psychopharmacology to be tested in advance. For several years, though, no such model was built -- in part, because of a scarcely articulated unease at the prospect of what it would mean. There were no formal barriers standing in the way -- government regulatory bodies and institutional ethics committees were concerned only with human and animal welfare, and no laboratory had yet been fire-bombed by activists for its inhumane treatment of physiological software -- but still, someone had to be the first to break all the unspoken taboos.

Someone had to make a high-resolution, whole-brain Copy -- and let it wake, and talk.

In 2024, John Vines, a Boston neurosurgeon, ran a fully conscious Copy of himself in a crude Virtual Reality. Taking slightly less than three hours of real time (pulse racing, hyper-ventilating, stress hormones elevated), the first Copy's first words were: "This is like being buried alive. I've changed my mind. Get me out of here."

His original obligingly shut him down -- but then later repeated the demonstration several times, without variation, reasoning that it was impossible to cause additional distress by running exactly the same simulation more than once.

When Vines went public, the prospects for advancing neurological research didn't rate a mention; within twenty-four hours -- despite the Copy's discouraging testimony -- the headlines were all immortality, mass migration into Virtual Reality, and the imminent desertion of the physical world.

Paul was twenty-four years old at the time, with no idea what to make of his life. His father had died the year before -- leaving him a modest business empire, centered on a thriving retail chain, which he had no interest in managing. He'd spent seven years traveling and studying -- science, history and philosophy -- doing well enough at everything he tried, but unable to discover anything that kindled real intellectual passion. With no struggle for financial security ahead, he'd been sinking quietly into a state of bemused complacency.

The news of John Vines's Copy blasted away his indifference. It was as if every dubious promise technology had ever made to transform human life was about to be fulfilled, with a vengeance. Longevity would only be the start of it; Copies could evolve in ways almost impossible for organic beings: modifying their minds, redefining their goals, endlessly transmuting themselves. The possibilities were intoxicating -- even as the costs and drawbacks of the earliest versions sank in, even as the inevitable backlash began, Paul was a child of the millennium; he was ready to embrace it all.

But the more time he spent contemplating what Vines had done, the more bizarre the implications seemed to be.

The public debate the experiment had triggered was heated, but depressingly superficial. Decades-old arguments raged again over just how much computer programs could ever have in common with human beings (psychologically, morally, metaphysically, information-theoretically . . . ) and even whether or not Copies could be "truly" intelligent, "truly" conscious. As more workers repeated Vines's result, their Copies soon passed the Turing test: no panel of experts quizzing a group of Copies and humans -- by delayed video, to mask the time-rate difference -- could tell which were which. But some philosophers and psychologists continued to insist that this demonstrated nothing more than "simulated consciousness," and that Copies were merely programs capable of faking a detailed inner life which didn't actually exist at all.

Supporters of the Strong AI Hypothesis insisted that consciousness was a property of certain algorithms -- a result of information being processed in certain ways, regardless of what machine, or organ, was used to perform the task. A computer model which manipulated data about itself and its "surroundings" in essentially the same way as an organic brain would have to possess essentially the same mental states. "Simulated consciousness" was as oxymoronic as "simulated addition."

Opponents replied that when you modeled a hurricane, nobody got wet. When you modeled a fusion power plant, no energy was produced. When you modeled digestion and metabolism, no nutrients were consumed -- no real digestion took place. So, when you modeled the human brain, why should you expect real thought to occur? A computer running a Copy might be able to generate plausible descriptions of human behavior in hypothetical scenarios -- and even appear to carry on a conversation, by correctly predicting what a human would have done in the same situation -- but that hardly made the machine itself conscious.