As always, von Neumann’s vantage point was the algorithmic realm, the center of the sphere, from which opportunities open up in all directions. This was also the vantage point of Einstein, who famously refused to contemplate the empirical data until he had deduced and perfected the logical structure of his theory. Neither was religious in any traditional way, but both reflected the Jewish insight of monotheism: a universe ruled by a single mind lending it order and significance.
Heims explains von Neumann’s strategy: “It became [von Neumann’s] mathematical and scientific style to push the use of formal logic and mathematics to the very limit, even into domains others felt to be beyond their reach” regarding the empirical world, “probably even life and mind, as comprehensible in terms of abstract formal structure.”
Bottom-up induction, stemming from empirical measurements alone, occurs not at the center but on the surface of the sphere. Induction requires theories — every experiment entails a concept to guide it — but the theories at the heart of scientific progress are often unacknowledged and mostly undeveloped. This inductive approach has ruled much of late twentieth-century science, instilling in it an inexorable instinct for the capillaries. The unacknowledged governing idea is that the smaller an entity, be it particle or string — and the larger and more costly the apparatus needed to conjure it up — the more important the entity is. By rejecting this approach von Neumann left as his greatest legacy the most ubiquitous, powerful, adaptable scientific “apparatus” humanity has ever known — and it made a new world.
Today, essentially every practical computer in the world is based on the “von Neumann architecture.” As early as 1943, he had declared himself “obscenely interested” in computing machines. He soon managed to transmit his obsession to the Manhattan Project, to missile research, to game theory, and to the modeling of economic activity. As he told a friend: “I am thinking about something much more important than bombs. I am thinking about computers.” What he was thinking would thrust mankind more deeply than ever before into the algorithmic realm, the computer era, the information age.
A crux of the information age is the law of separation: separation of logic from material substrate, content from conduit, algorithm from machine, genetic message from DNA molecule. In biology, Francis Crick dubbed this proposition the Central Dogma: information can flow from the genetic message to its embodiment in proteins — from word to flesh — but not in the other direction. In communications, any contrary flow of influence, from the physical carrier to the content of the message, is termed noise. The purpose of transmission is to eliminate or transcend it.
The governing scheme of all communication and computational systems is top-down. Applying to everything from the human body to the cosmos, hierarchical systems proceed from creative content through logical structure or algorithm, and then to the physical substrate or material embodiment, which is independent of the higher levels. The von Neumann architecture would be the expression in computer science of this hierarchy and separation.
Just as von Neumann insisted that the axiomatic content of quantum theory be separate from particular physical models, he resolved that his computing machines be independent of vacuum tubes or relays or magnetic domains or any other material embodiment. He wanted a general-purpose computing machine with a design so scalable and adaptable that it could survive the spiraling advance of the technology.
The crucial step in achieving adaptability was separation. Von Neumann would separate the physical memory from the physical processor and then keep both the data and, crucially, the software instructions in memory, fully abstracted from the “mechanics” of the processor. This separation distinguishes a general-purpose from a special-purpose computer.
A mechanical device physically embodies its algorithm, its “instruction set,” in the material form of the machine. But this embodiment makes the one captive to the other: one machine, one algorithm. A vivid example is a classic Swiss watch, a special-purpose computer that achieves its goal only by a fantastically precise mechanical rendering of a single algorithm. If computers were built like Swiss watches — a dead end toward which computer science actually did proceed for a time — each one would be a multimillion-dollar device good for one and only one function.
By separating memory from processor, and maintaining the processor’s instruction set not in the mechanics of the device but in its fully abstract, algorithmic form as software in memory, a von Neumann machine would be able to perform an infinite number of algorithms or programs, ushering in the computer age.
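The logic of this separation can be suggested in a few lines of Python. The four-field instruction format and opcode names below are hypothetical inventions for illustration, not the IAS machine’s actual instruction set: a single flat memory holds instructions and data alike, and a simple fetch-decode-execute loop turns one fixed “device” into an adder, a multiplier, or anything else, depending only on the program placed in memory.

```python
# A minimal sketch of the stored-program idea. The instruction format
# (op, a, b, dest) is a made-up toy, not any historical design.

def run(memory, pc=0):
    """Fetch-decode-execute loop. One flat 'memory' list holds both
    instructions (tuples) and data (numbers) in a single address space."""
    while True:
        op, a, b, dest = memory[pc]           # fetch the instruction at pc
        if op == "HALT":
            return memory
        elif op == "ADD":                     # memory[dest] = memory[a] + memory[b]
            memory[dest] = memory[a] + memory[b]
        elif op == "MUL":
            memory[dest] = memory[a] * memory[b]
        pc += 1                               # advance to the next instruction

# One "device," two functions: only the program stored in memory differs.
adder      = [("ADD", 2, 3, 4), ("HALT", 0, 0, 0), 6, 7, 0]
multiplier = [("MUL", 2, 3, 4), ("HALT", 0, 0, 0), 6, 7, 0]
print(run(adder)[4])        # 13
print(run(multiplier)[4])   # 42
```

Swapping the program changes the machine’s function without touching the interpreter, which is the whole point: the Swiss watch hardwires its one algorithm, while the von Neumann machine simply reads whichever algorithm its memory happens to hold.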
Von Neumann was the first to see that “in a few years” it would be possible to create computing machines that could operate “a billion times faster” than existing technology. Vindicating his vision, the von Neumann architecture freed the industry from contriving an ever-changing panoply of special-purpose machines and enabled engineers to focus on building speed and capacity in devices, such as memories and microprocessors, whose essential designs have remained unchanged for decades.
The von Neumann machine assumed its first physical form at the Institute for Advanced Study (IAS) in Princeton, where von Neumann had settled in 1930 after leaving the increasingly treacherous politics and parlous economics of Europe. With his mother and his two younger brothers, he had come to the United States after the death of his father in 1929. All around the globe, scientists used the von Neumann architecture, embodied in the IAS computer, as a model for their own machines. Expounded in a major paper penned in 1945, the von Neumann architecture provided the basis first for some thirty machines following the specific “Princestitute” design and then supplied the essential logic for all the computers to come.
After World War II, the advisory committee of the Weizmann Institute in Rehovot, Israel, included both Albert Einstein and John von Neumann. At a meeting in July 1947, the presence of these contending masses in orbit at the pinnacles of their prestige must have palpably distended the geometry of the room.
The two men clashed over whether the incipient state of Israel could use what was then considered a giant computer. Its architecture would repeat the von Neumann design created at the Institute for Advanced Study.
Einstein had long been happy to perform thought experiments that juggled whole universes in his head, while calling in associates for any necessary computing assistance. He could see no reason for the tiny embattled agricultural country to acquire a computing machine that could consume 20 percent of the Weizmann Institute’s annual budget.
“Who would use it?” he asked. “Who would maintain it?” He implied that the machine was a golden calf in the desert, suitable for worship by miscreant militarists and a distraction from the pure tablets of true science.
Igal Talmi, an Israeli nuclear physicist at Weizmann who pioneered a deeper understanding of the “shell” structure of the nucleus, still remembers the debate. Under Einstein’s influence, Talmi made two predictions about the WEIZAC (Weizmann Automatic Computer). The first was that “it could never be built because of the limitations of Israeli technology. The second was that if it worked it would be used only an hour a week or so.” Talmi was “very happy to be wrong on both points.”