"I know. Everybody's a little uptight right now."
"And why is that?"
"Because of what's going on here."
"And what is going on here?"
He stopped before a small cubicle on the other side of the room. "Julia couldn't tell you, because it was classified." He touched his keycard to the door.
I said, "Classified? Medical imaging is classified?"
The door latch clicked open, and we went inside. The door closed behind us. I saw a table, two chairs, a computer monitor and a keyboard. Ricky sat down, and immediately started typing. "The medical imaging project was just an afterthought," he said, "a minor commercial application of the technology we are already developing."
"Uh-huh. Which is?"
"Military."
"Xymos is doing military work?"
"Yes. Under contract." He paused. "Two years ago, the Department of Defense realized from their experience in Bosnia that there was enormous value to robot aircraft that could fly overhead and transmit battlefield images in real time. The Pentagon knew that there would be more and more sophisticated uses for these flying cameras in future wars. You could use them to spot the locations of enemy troops, even when they were hidden in jungle or in buildings; you could use them to control laser-guided rocket fire, or to identify the location of friendly troops, and so on. Commanders on the ground could call up the images they wanted, in the spectra they wanted-visible, infrared, UV, whatever. Real-time imaging was going to be a very powerful tool in future warfare."
"Okay…"
"But obviously," Ricky said, "these robot cameras were vulnerable. You could shoot them down like pigeons. The Pentagon wanted a camera that couldn't be shot down. They imagined something very small, maybe the size of a dragonfly-a target too small to hit. But there were problems with power supply, with small control surfaces, and with resolution using such a small lens. They needed a bigger lens."
I nodded. "And so you thought of a swarm of nanocomponents."
"That's right." Ricky pointed to the screen, where a cluster of black spots wheeled and turned in the air, like birds. "A cloud of components would allow you to make a camera with as large a lens as you wanted. And it couldn't be shot down because a bullet would just pass through the cloud. Furthermore, you could disperse the cloud, the way a flock of birds disperses with a gunshot. Then the camera would be invisible until it re-formed again. So it seemed an ideal solution. The Pentagon gave us three years of DARPA funding."
"And?"
"We set out to make the camera. It was of course immediately obvious that we had a problem with distributed intelligence."
I was familiar with the problem. The nanoparticles in the cloud had to be endowed with a rudimentary intelligence, so that they could interact with each other to form a flock that wheeled in the air. Such coordinated activity might look pretty intelligent, but it occurred even when the individuals making up the flock were rather stupid. After all, birds and fish could do it, and they weren't the brightest creatures on the planet.
Most people watching a flock of birds or a school of fish assumed there was a leader, and that all the other animals followed the leader. That was because human beings, like most social mammals, had group leaders.
But birds and fish had no leaders. Their groups weren't organized that way. Careful study of flocking behavior (frame-by-frame video analysis) showed that, in fact, there was no leader. Birds and fish responded to a few simple stimuli among themselves, and the result was coordinated behavior. But nobody was controlling it. Nobody was leading it. Nobody was directing it.
Nor were individual birds genetically programmed for flocking behavior. Flocking was not hard-wired. There was nothing in the bird brain that said, "When thus-and-such happens, start flocking." On the contrary, flocking simply emerged within the group as a result of much simpler, low-level rules. Rules like, "Stay close to the birds nearest you, but don't bump into them." From those rules, the entire group flocked in smooth coordination. Because flocking arose from low-level rules, it was called emergent behavior. The technical definition of emergent behavior was behavior that occurred in a group but was not programmed into any member of the group. Emergent behavior could occur in any population, including a computer population. Or a robot population. Or a nanoswarm.
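The whole trick fit on a single page of code. A sketch like the one below captures it, with each simulated bird following only those two rules; the constants and the exact arithmetic are my own shorthand for illustration, not anyone's production code:

```python
import random

NEIGHBOR_RADIUS = 5.0   # how far a bird "sees" (illustrative value)
TOO_CLOSE = 1.0         # personal-space threshold (illustrative value)

class Bird:
    def __init__(self):
        self.x, self.y = random.uniform(0, 20), random.uniform(0, 20)
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

def step(flock):
    # Each bird adjusts its velocity using only its nearby neighbors.
    for b in flock:
        for o in flock:
            if o is b:
                continue
            dx, dy = o.x - b.x, o.y - b.y
            dist = (dx * dx + dy * dy) ** 0.5
            if dist >= NEIGHBOR_RADIUS:
                continue                      # too far away to matter
            if dist < TOO_CLOSE:
                b.vx -= dx * 0.05             # "...but don't bump into them"
                b.vy -= dy * 0.05
            else:
                b.vx += dx * 0.01             # "stay close to the birds nearest you"
                b.vy += dy * 0.01
    for b in flock:
        speed = (b.vx * b.vx + b.vy * b.vy) ** 0.5
        if speed > 2.0:                       # crude speed cap, keeps things stable
            b.vx, b.vy = 2.0 * b.vx / speed, 2.0 * b.vy / speed
        b.x += b.vx
        b.y += b.vy

flock = [Bird() for _ in range(50)]
for _ in range(100):
    step(flock)   # coordinated motion emerges; no line above commands it
```

Run it, and the scattered points pull themselves into a drifting, wheeling cluster that no line of the program ever asked for. That was the sense in which flocking was emergent.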
I said to Ricky, "Your problem was emergent behavior in the swarm?"
"Exactly."
"It was unpredictable?"
"To put it mildly."
In recent decades, this notion of emergent group behavior had caused a minor revolution in computer science. What that meant for programmers was that you could lay down rules of behavior for individual agents, but not for the agents acting together. Individual agents, whether programming modules, or processors, or, as in this case, actual micro-robots, could be programmed to cooperate under certain circumstances, and to compete under other circumstances. They could be given goals. They could be instructed to pursue their goals with single-minded intensity, or to be available to help other agents. But the result of these interactions could not be programmed. It just emerged, with often surprising outcomes.
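The quickest way to see why was to write the smallest possible version. In a sketch like this one, with the rules and thresholds invented purely for illustration, each agent's behavior is completely specified, and the group's outcome still isn't:

```python
import random

class Agent:
    def __init__(self, goal):
        self.goal = goal                      # where this agent is trying to go
        self.pos = random.uniform(0, 100)

    def act(self, others):
        # Two invented local rules: help a "struggling" neighbor if one is
        # close by; otherwise pursue your own goal. Each rule is simple.
        struggling = [o for o in others
                      if abs(o.pos - o.goal) > 30 and abs(o.pos - self.pos) < 10]
        target = struggling[0].pos if struggling else self.goal
        self.pos += 1 if target > self.pos else -1

agents = [Agent(goal=random.uniform(0, 100)) for _ in range(20)]
for _ in range(200):
    for a in agents:
        a.act([o for o in agents if o is not a])

# Nothing above specifies the group result. On some runs every agent
# reaches its goal; on others a knot of mutual "helpers" deadlocks,
# each distracted by the others, and never gets back to work.
reached = sum(abs(a.pos - a.goal) <= 1 for a in agents)
print(f"{reached} of {len(agents)} agents reached their goals")
```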
In a way this was very exciting. For the first time, a program could produce results that absolutely could not be predicted by the programmer. These programs behaved more like living organisms than man-made automatons. That excited programmers, but it frustrated them, too. Because the program's emergent behavior was erratic. Sometimes competing agents fought to a standstill, and the program failed to accomplish anything. Sometimes agents were so influenced by one another that they lost track of their goal, and did something else instead. In that sense the program was very childlike: unpredictable and easily distracted. As one programmer put it, "Trying to program distributed intelligence is like telling a five-year-old kid to go to his room and change his clothes. He may do that, but he is equally likely to do something else and never return."
Because these programs behaved in a lifelike way, programmers began to draw analogies to the behavior of real organisms in the real world. In fact, they began to model the behavior of actual organisms as a way to get some control over program outcomes. So you had programmers studying ant swarming, or termite mounding, or bee dancing, in order to write programs to control airplane landing schedules, or package routing, or language translation. These programs often worked beautifully, but they could still go awry, particularly if circumstances changed drastically. Then they would lose their goals. That was why I began, five years ago, to model predator-prey relationships as a way to keep goals fixed. Because hungry predators weren't distracted. Circumstances might force them to improvise their methods; and they might try many times before they succeeded, but they didn't lose track of their goal.
So I became an expert in predator-prey relationships. I knew about packs of hyenas, African hunting dogs, stalking lionesses, and attacking columns of army ants. My team had studied the literature from the field biologists, and we had generalized those findings into a program module called PREDPREY, which could be used to control any system of agents and make its behavior purposeful. To make the program seek a goal.
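The heart of it was that separation of goal from method. Stripped to a sketch, with invented names and numbers standing in for the real module, a PREDPREY-style pursuit rule looked something like this:

```python
import random

def sign(a, b):
    """-1, 0, or +1: the direction from a toward b."""
    return (b > a) - (b < a)

def predprey_step(pos, prey, blocked):
    """One pursuit step on a grid: the method improvises, the goal never changes."""
    dx, dy = sign(pos[0], prey[0]), sign(pos[1], prey[1])
    options = [(pos[0] + dx, pos[1] + dy),    # straight at the prey
               (pos[0] + dx, pos[1]),         # sidestep an obstacle...
               (pos[0], pos[1] + dy)]         # ...or angle in from the side
    open_moves = [p for p in options if p not in blocked]
    # The target is never reassigned: a hungry predator doesn't get distracted.
    return random.choice(open_moves) if open_moves else pos

pos, prey = (0, 0), (12, 7)
obstacles = {(3, 3), (4, 4), (4, 3)}          # force the predator to improvise
for step in range(100):
    if pos == prey:
        print(f"prey caught after {step} steps")
        break
    pos = predprey_step(pos, prey, obstacles)
```

The obstacles force the method to change, step after step, but nothing in the loop ever touches the target. That was the whole point of the module: whatever else the agents did, the goal stayed fixed.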
Looking at Ricky's screen, at the coordinated units moving smoothly as they turned through the air, I said, "You used PREDPREY to program your individual units?"
"Right. We used those rules."
"Well, the behavior looks pretty good to me," I said, watching the screen. "Why is there a problem?"
"We're not sure."
"What does that mean?"
"It means we know there's a problem, but we're not sure what's causing it. Whether the problem is programming-or something else."