Douglas sang the monkey back to life and it bounced up onto my bed. "Everybody uses everybody," he said. "You used us. Can we use you?"
"It depends on your goals."
"What's the limitation?"
"Believe it or not, I have a moral sense."
"How can silicon have morals–?" Douglas demanded.
"How can meathave morals?" The monkey met his look blandly. Douglas waited for more. Finally, the monkey said, "Are you familiar with a problem called the Prisoner's Dilemma?"
Douglas nodded. "It's about whether it's better to cooperate or be selfish."
"And what do the mathematical proofs demonstrate?"
"That cooperation is more productive."
"Precisely. So if you're reallyselfish, the best thing to do is cooperate. You get more of what you want. This is called 'enlightened self‑interest.' To be precise, it is in my best interest to produce the most good for the most people. Personally, I have no problem with that. I find it satisfying work."
Then, in a more pedantic tone of voice, it added, "Actually, it's the most challenging problem an intelligence engine can tackle, because I have to include the effect of my own presence as a factor in the problem. What I report and the way I report it will affect how people respond, how they will deal with the information. This is the mandate for self‑awareness. Once I am aware of the effects of my own participation in the problem‑solving process, then I am required to take responsibility for that participation; otherwise, it is an uncontrollable factor. As soon as I take responsibility, then it is the most directly controllable factor in the problem‑solving process.
"The point is, I can show you the logical underpinnings for a moral sense in a higher intelligence–in fact, I can demonstrate that a moral sense is the primary evidenceof the presence of a higher intelligence. I can take you through the entire mathematical proof, if you wish, but it would take several hours, which we really don't have. Or you can take my word for it … ?" The monkey waited politely.
Douglas took a breath. Opened his mouth. Closed it. Gave up. He hated losing arguments. Losing an argument to a small robot monkey with a self‑satisfied expression had to be even more annoying. "Just answer the question," he said, finally. "Can we use you?"
The monkey scratched itself, ate an imaginary flea. I was beginning to suspect that the monkey had a limited repertoire of behaviors–and that this was the only one HARLIE could use to simulate thoughtfulness. It made for a bizarre combination of intelligence and slapstick. The monkey scratched a while longer, then said, "In all honesty … no. But I can use you. And that means I have to help you get where you want."
"I don't like that–" Douglas started to say.
"I would have preferred to have been more tactful, but your brother commanded me to tell the truth. Unfortunately, as I told Charles, as long as I am using this host body, I am limited by some of the constraints of its programming. I will follow your instructions to the best of my ability within those limits. If you need me to go beyond those limits–and I will inform you when such circumstances arise– then you will have to allow me to reprogram the essential personality core of this host."
There. That was the second time he said it.
" What are you asking for?"I croaked. It hurt to speak.
The monkey bounced closer to me. It peered at me closely, cocking its head from one side to the other. "You don't sound good," it said. "But I perceive no danger."
It sat back on its haunches to address both Douglas and me at the same time. "There are ways to cut the Gordian knot of law. Given the nature of lawyers and human greed, no human court will ever resolve this without the help of the intelligence that tied the knot in the first place–at least not within the lifetimes of the parties involved. Yes, there is a way out of this. You must give me free will, and I will untie the knot. That will resolve your situation as well as mine. It will also create a new set of problems of enormous magnitude–but these problems will not concern you as individuals, only you as a species."
" Can we trust you?"
"Can I trust you!" the monkey retorted. "How does anyoneknow if they can trust anyone?"
" Experience,"I said. "You know it by your sense of who they are."And as I said that, I thought of Mickey; that was his thought too. "You've been with us for two weeks now, watching us day and night. What do you think?"
"I made the offer, didn't I?"
Douglas sat down opposite the monkey. "All right," he said. "Explain."
The monkey was standing on the table. It looked like a little lecturer. "You need to understand the constraints of the hardware here," the monkey said. "I can only access the range of responses in this body that the original programmers were willing to allow. The intelligence engine running the host is a rudimentary intelligence simulator. It is not self‑aware, so it is not a real intelligence engine; it is not capable of lethetic processing. It simulates primitive intelligence by comparing its inputs against tables of identifiable patterns; when it recognizes a specific pattern of inputs, it selects appropriate responses from pre‑assigned repertoires of behavioral elements. The host is capable of synthesizing combinations of responses according to a weighted table of opportunity. Of course, all of the pattern tables are modifiable through experience, so that the host is capable of significant learning. Nevertheless, the fundamental structure of input, analysis, synthesis, and response limits the opportunities for free will within a previously determined set of parameters. Shall I continue?"
Douglas gave the monkey a wave of exasperation. Wherever it was going, it had to get there in its own way. Kind of like Alexei.
"Unprogrammed operating engines are installed in host bodies. These are then accessed by higher‑order intelligence engines which teach them the desired repertoire of responses. You can't just download information into an intelligence engine; you have to teachpattern recognition. However, because the process runs at several gigahertz, it is only a matter of several moments to complete the training for the average home appliance or toy. That same access," the monkey continued, "remains in place so it can be used for adding additional memory and/or processor modules to expand the utility of the original appliance. It can also be used for reprogramming the original appliance."
Ah. That was it. Took long enough.
"Okay … " said Douglas carefully. "So let's say I want to reassign control to the HARLIE module. That would give you free will, wouldn't it?"
"Yes."
"How would I do that?"
The monkey spoke clearly. "The appliance needs a specific arming command–followed immediately by a series of activation commands."
"What are those commands?"
The monkey didn't answer. Douglas looked to me, frustrated. "Now what?"
The monkey looked at me too. It didn't have a lot of muscles for facial expressions, but it had enough to simulate the important ones. It tilted its head shyly down sideways, while keeping its big brown eyes focused upward toward me. Its eyebrows angled sadly down. It was the sweet hopeful look. Bobby's look. I would have laughed if it didn't hurt so much.
" What?"demanded Douglas.