Bashira asked: Is it to the contestant’s advantage to switch?

Of course not, thought Caitlin. It didn’t make any difference if you switched or not; one remaining door had a car behind it and the other had a goat, and the odds were now fifty-fifty that you’d picked the right door.

Except that wasn’t what the article Bashira had forwarded said. It contended that your chances of winning the car were much better if you switched.

And that, Caitlin was sure, was just plain wrong. She figured someone else must have written up a refutation of this puzzle before, so she googled. It took her a few minutes to find what she was looking for; the appropriate search terms turned out to be “Monty Hall problem,” and—

What the hell?

“…When the problem and the solution appeared in Parade, ten thousand readers, including nearly a thousand Ph.D.s, wrote to the magazine claiming the published solution was wrong. Said one professor, ‘You blew it! Let me explain: If one door is shown to be a loser, that information changes the probability of either remaining choice—neither of which has any reason to be more likely—to 1/2. As a professional mathematician, I’m very concerned with the general public’s lack of mathematical skills. Please help by confessing your error and, in the future, being more careful.’ ”

The person who had written the disputed answer was somebody called Marilyn vos Savant, who apparently had the highest IQ on record. But Caitlin didn’t care how high the lady’s IQ was. She agreed with the readers who said vos Savant had blown it; the answer had to be wrong.

And, as Caitlin liked to say, she was an empiricist at heart. The easiest way to prove to Bashira that vos Savant was wrong, it seemed to her, would be by writing a little computer program that would simulate a lot of runs of the game. And, even though she was exhausted, she was also pumped from her conversations with Webmind; a little programming would be just the thing to let her relax. She only needed fifteen minutes to whip up something to do the trick, and—

Holy crap.

It took just seconds to run a thousand trials, and the results were clear. If you switched doors when offered the opportunity to do so, your chance of winning the car was about twice as good as it was when you kept the door you’d originally chosen.
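The passage doesn’t reproduce Caitlin’s program, but a minimal simulation in this spirit would give the same result. This is only a sketch: the door encoding, the trial count, and the function names are illustrative assumptions, not details from the text.

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """Play one round: a car behind one of three doors, goats behind the others."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host always opens a door that is neither the contestant's pick nor the car.
    host_opens = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the one remaining closed door.
        pick = next(d for d in doors if d != pick and d != host_opens)
    return pick == car

def run(trials: int = 1000) -> None:
    stay = sum(monty_hall_trial(switch=False) for _ in range(trials))
    swap = sum(monty_hall_trial(switch=True) for _ in range(trials))
    # Staying wins only when the first pick was the car (1/3 of the time);
    # switching wins whenever it wasn't (the other 2/3).
    print(f"stay: {stay}/{trials}   switch: {swap}/{trials}")

if __name__ == "__main__":
    run()
```

Over a thousand trials, staying wins roughly a third of the time and switching roughly two-thirds, which is exactly the two-to-one advantage Caitlin’s program showed her.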

But that just didn’t make sense. Nothing had changed! The host was always going to reveal a door that had a goat behind it, and there was always going to be another door that hid a goat, too.

She decided to do some more googling—and was pleased to find that Paul Erdős hadn’t believed the published solution until he’d watched hundreds of computer-simulated runs, too.

Erdős had been one of the twentieth century’s leading mathematicians, and he’d co-authored a great many papers. The “Erdős number” was named after him: if you had collaborated with Erdős yourself, your Erdős number was 1; if you had collaborated with someone who had directly collaborated with Erdős, your number was 2, and so on. Caitlin’s father had an Erdős number of 4, she knew—which was quite impressive, given that her dad was a physicist and not a mathematician.
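The definition is just shortest-path distance from Erdős in a co-authorship graph, so it can be computed with a breadth-first search. A toy sketch follows; the graph and the names in it are invented purely for illustration.

```python
from collections import deque

def erdos_numbers(coauthors: dict[str, set[str]]) -> dict[str, int]:
    """Breadth-first search outward from Erdős over an undirected co-authorship graph."""
    dist = {"Erdős": 0}
    queue = deque(["Erdős"])
    while queue:
        person = queue.popleft()
        for peer in coauthors.get(person, ()):
            if peer not in dist:
                dist[peer] = dist[person] + 1
                queue.append(peer)
    return dist

# Hypothetical graph: A wrote with Erdős, B wrote only with A.
graph = {
    "Erdős": {"A"},
    "A": {"Erdős", "B"},
    "B": {"A"},
}
print(erdos_numbers(graph))  # {'Erdős': 0, 'A': 1, 'B': 2}
```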

How could she, let alone someone like Erdős, have been wrong? It was obvious that switching doors should make no difference!

Caitlin read on and found a quote from a Harvard professor, who, in conceding at last that vos Savant had been right all along, said, “Our brains are just not wired to do probability problems very well.”

She supposed that was true. Back on the African savanna, those who mistook every bit of movement in the grass for a hungry lion were more likely to survive than those who dismissed each movement as nothing to worry about. If you always assume that it’s a lion, and nine times out of ten you’re wrong, at least you’re still alive. If you always assume that it’s not a lion, and nine times out of ten you’re right—you end up dead. It was a fascinating and somewhat disturbing notion: that humans had been hardwired through genetics to get certain kinds of mathematical problems wrong—that evolution could actually program people to be incorrect about things.

Caitlin felt her watch, and, astonished at how late it had become, quickly got ready for bed. She plugged her eyePod into the charging cable and deactivated the device, shutting off her vision; she had trouble sleeping if there was any visual stimulation.

But although she was suddenly blind again, she could still hear perfectly well—in fact, she heard better than most people did. And, in this new house, she had little trouble making out what her parents were saying when they were talking in their bedroom.

Her mother’s voice: “Malcolm?”

No audible reply from her father, but he must have somehow indicated that he was listening, because her mother went on: “Are we doing the right thing—about Webmind, I mean?”

Again, no audible reply, but after a moment, her mother spoke: “It’s like—I don’t know—it’s like we’ve made first contact with an alien lifeform.”

“We have, in a way,” her father said.

“I just don’t feel competent to decide what we should do,” her mom said. “And—and we should be studying this, and getting others to study it, too.”

Caitlin shifted in her bed.

“There’s no shortage of computing experts in this town,” her father replied.

“I’m not even sure that it’s a computing issue,” her mom said. “Maybe bring some of the people at the Balsillie on board? I mean, the implications of this are gigantic.”

Research in Motion—the company that made BlackBerrys—had two founders: Mike Lazaridis and Jim Balsillie. The former had endowed the Perimeter Institute, and the latter, looking for a different way to make his mark, had endowed an international-affairs think tank here in Waterloo.

“I don’t disagree,” said Malcolm. “But the problem may take care of itself.”

“How do you mean?”

“Even with teams of programmers working on it, most early versions of software crash. How stable can an AI be that emerged accidentally? It might well be gone by morning…”

That was the last she heard from her parents that night. Caitlin finally drifted off to a fitful sleep. Her dreams were still entirely auditory; she woke with a start in the middle of one in which a baby’s cry had suddenly been silenced.

* * *

“Where’s that bloody AI expert?” demanded Tony Moretti.

“I’m told he’s in the building now,” Shelton Halleck said, putting a hand over his phone’s mouthpiece. “He should be—”

The door opened at the back of the WATCH mission-control room, and a broad-shouldered, redheaded man entered, wearing a full-bird Air Force colonel’s service-dress uniform; he was accompanied by a security guard. A WATCH visitor’s badge was clipped to his chest beneath an impressive row of decorations.

Tony had skimmed the man’s dossier: Peyton Hume, forty-nine years old; born in St. Paul, Minnesota; Ph.D. from MIT, where he’d studied under Marvin Minsky; twenty years in the Air Force; specialist in military expert systems.

“Thank you for coming in, Colonel Hume,” Tony said. He nodded at the security guard and waited for the man to leave, then: “We’ve got something interesting here. We think we’ve uncovered an AI.”

Hume’s blue eyes narrowed. “The term ‘artificial intelligence’ is bandied about a lot. What precisely do you mean?”

“I mean,” said Tony, “a computer that thinks.”

“Here in the States?”