The key condition, however, was assuring “retaliation in kind.” In The Absolute Weapon, Brodie wrote that doing so meant keeping the atomic arsenal far away from the targets of an atomic attack. Through the late 1940s, Brodie had thought that the logical targets to attack would be a nation’s large cities, since the materials needed for making a bomb were too scarce for hitting much else. By 1952, it was clear that the United States would have an abundance of bombs and materials with which to manufacture more. But the strategic concepts from the brief period of atomic scarcity had stuck, as if by analytic habit.
Brodie was hardly alone in his thinking. The U.S. Air Force was concentrating its air-defense weapons around the large cities of the northeast. When the Joint Chiefs contemplated the most likely targets of a Soviet attack against the United States, they listed the nation’s most critical industrial plants. When war planners thought about which targets to hit inside the Soviet Union, which routes the SAC bombers should fly and how they should be refueled, they always assumed that SAC would be able to use its entire force of bombers, minus a few that would probably break down or get shot down along the way. They always assumed—sometimes tacitly, often explicitly—that the United States would naturally get in the first blow.
But around the same time that Ernie Plesset, Charlie Hitch and Bernard Brodie were contemplating the implications of the H-bomb, another RAND analyst was mulling over a different set of questions. What if the Soviets got in the first blow? And what if instead of heading toward American cities, the Soviet bombers set out to destroy the American SAC force, which at the time was concentrated on a small number of air bases overseas? If the Soviets chose that tactic and succeeded, then the United States might not have very many weapons surviving the attack for “retaliation in kind,” meaning that the Kremlin might see some incentive—and some possibility of winning a nuclear war—in launching a surprise attack, à la Pearl Harbor, catching all of SAC on the ground. If they could pull off such an attack, then all previous calculations about the costs and risks of starting a nuclear war would be toppled completely and American security would suddenly appear to be in a grim state.
The analyst pondering all this was Albert J. Wohlstetter, a mathematical logician only recently hired as a consultant by the RAND economics division. Wohlstetter did not reach that conclusion, or even that particular formulation of the problem, all at once. It came through a series of reactions to other studies going on at RAND about the same time—a historical study by Wohlstetter’s wife, Roberta, in RAND’s social science division, some military applications of John von Neumann’s theory of games by the mathematics and economics divisions, and, especially, some work in an entirely new field called “systems analysis” being done by a RAND mathematician named Edwin Paxson.
Ed Paxson had come to RAND in 1947, invited by John Williams, who had worked with him during World War II at the Applied Mathematics Panel in New York. Paxson was ingenious, rude, abrasive, a driven man, hardworking, hard-drinking, chain-smoking. A RAND analyst would typically brief his colleagues about a project on which he was working before writing it up as a formal report. It was great sport to sit around and blast holes in the briefer’s analysis, one of RAND’s favorite intellectual games. But Paxson was brutal at it. At one briefing, the poor object of his scorn and derision grew so nervous that he finally fainted.
At RAND, Paxson invented the term “systems analysis.” It differed from the “operational research” of World War II in one critical respect. An operational researcher answered the question: what is the best that can be done, given the following equipment having the following characteristics? The systems analyst, as Paxson conceived of the notion, would answer a more creative question: here is the mission that some weapon must accomplish—what kind of equipment, having what sorts of characteristics, would be best for the job? In short, the systems analyst becomes a military planner in his own right.
When Paxson first came to RAND, the management decided not to put him in any single division. Instead, they dubbed him “The Systems Analyst,” and allowed him to dream up his own projects and borrow assistants from any of the various departments. Paxson was the numbers-cruncher par excellence. He loved to devise and try to solve equations of gargantuan dimension, the more numbers and variables and mathematical complexities the better. His dream was to quantify every single factor of a strategic bombing campaign—the cost, weight and payload of each bomber, its distance from the target, how it should fly in formation with other bombers and their fighter escorts, their exact routing patterns, the refueling procedures, the rate of attrition, the probability that something might go wrong in each step along the way, the weight and accuracy of the bomb, the vulnerability of the target, the bomb’s “kill probability,” the routing of the planes back to their bases, the fuel consumed, and all extraneous phenomena such as the weather—and put them all into a single mathematical equation.
Paxson had boundless faith in the potential of systems analysis. For several years, his enthusiasm was contagious in many of the corridors at RAND. For an organization dominated by mathematicians, systems analysis appeared to be the way to get the scientific—the right—answer. Projects that involved no systems analysis, such as most of the work produced by the social science division, were looked down upon, considered interesting in a speculative sort of way at best.
However, there was one aspect of systems analysis that, some soon began to realize, considerably undercut its promise as a panacea. It may have been more creative than operational research; but during World War II, OR analysts were continuously working with real combat data, altering their calculations and theories to be compatible with new facts. There was, of course, no real combat data for World War III, the cosmic “nuclear exchange” that the systems analysts of RAND were examining. The numbers that fed into the equations came from speculation, theories, derivations from weapons-test results, sometimes from thin air—not from real war. Since they were analyzing primarily weapons of the future, not even something as simple but absolutely crucial to the analysis as the weapon’s price tag could be fully trusted; it could only be predicted, and then just roughly.
In 1949, Ed Paxson and another early systems analyst at RAND, Edward S. Quade, worked for many months on mathematical models of hypothetical air duels fought between fighter planes and bombers. After trudging through a tremendously large series of complicated equations, Paxson and Quade concluded that, with the right kind of fire-control systems, a fighter pilot could close in on a bomber at a certain optimal point, fire his weapon, and shoot the bomber out of the sky in six out of every ten confrontations. After doing these calculations, Paxson and Quade compared their findings with real combat data from World War II. They found that in those cases where the fighter and bomber were in roughly the same geometric position that Paxson and Quade figured would give the fighter a 60 percent probability of kill, the fighter pilot actually downed the bomber only 2 percent of the time. Why the huge difference between theoretical calculation and reality? They puzzled over this disparity for a few disturbing days, and finally conceded that a real pilot in a real airplane shooting real bullets does not so eagerly or easily close in on a big bomber. He takes a couple of shots perhaps, then veers off. Doing anything more would be too dangerous.