Ontological and epistemological arguments are never easily settled. However, "Artificial Life," whether it fully deserves that term or not, is at least easy to see, and rather easy to get your hands on. "Blind Watchmaker" is the A-Life equivalent of using one's computer as a home microscope and examining pondwater. Best of all, the program costs only twelve bucks! It's cheap and easy to become an amateur A-Life naturalist.
Because of the ubiquity of powerful computers, A-Life is "garage-band science." The technology's out there for almost anyone interested -- it's hacker-science. Much of A-Life practice basically consists of picking up computers, pointing them at something promising, and twiddling with the focus knobs until you see something really gnarly. *Figuring out what you've seen* is the tough part, the "real science"; this is where actual science, reproducible, falsifiable, formal, and rigorous, parts company from the intoxicating glamor of the intellectually sexy. But in the meantime, you have the contagious joy and wonder of just *gazing at the unknown* -- the primal thrill of discovery and exploration.
A lot has been written already on the subject of Artificial Life. The best and most complete journalistic summary to date is Steven Levy's brand-new book, ARTIFICIAL LIFE: THE QUEST FOR A NEW CREATION (Pantheon Books 1992).
The easiest way for an interested outsider to keep up with this fast-breaking field is to order books, videos, and software from an invaluable catalog: "Computers In Science and Art," from Media Magic. Here you can find the Proceedings of the first and second Artificial Life Conferences, where the field's most influential papers, discussions, speculations and manifestos have seen print.
But learned papers are only part of the A-Life experience. If you can see Artificial Life actually demonstrated, you should seize the opportunity. Computer simulation of such power and sophistication is a truly remarkable historical advent. No previous generation had the opportunity to see such a thing, much less ponder its significance. Media Magic offers videos about cellular automata, virtual ants, flocking, and other A-Life constructs, as well as personal software "pocket worlds" like CA Lab, Sim Ant, and Sim Earth. This very striking catalog is available free from Media Magic, P.O. Box 507, Nicasio, CA 94946.
"INTERNET" [aka "A Short History of the Internet"]
Some thirty years ago, the RAND Corporation, America's foremost Cold War think-tank, faced a strange strategic problem. How could the US authorities successfully communicate after a nuclear war?
Postnuclear America would need a command-and-control network, linked from city to city, state to state, base to base. But no matter how thoroughly that network was armored or protected, its switches and wiring would always be vulnerable to the impact of atomic bombs. A nuclear attack would reduce any conceivable network to tatters.
And how would the network itself be commanded and controlled? Any central authority, any network central citadel, would be an obvious and immediate target for an enemy missile. The center of the network would be the very first place to go.
RAND mulled over this grim puzzle in deep military secrecy, and arrived at a daring solution. The RAND proposal (the brainchild of RAND staffer Paul Baran) was made public in 1964. In the first place, the network would *have no central authority.* Furthermore, it would be *designed from the beginning to operate while in tatters.*
The principles were simple. The network itself would be assumed to be unreliable at all times. It would be designed from the get-go to transcend its own unreliability. All the nodes in the network would be equal in status to all other nodes, each node with its own authority to originate, pass, and receive messages. The messages themselves would be divided into packets, each packet separately addressed. Each packet would begin at some specified source node, and end at some other specified destination node. Each packet would wind its way through the network on an individual basis.
The particular route that the packet took would be unimportant. Only final results would count. Basically, the packet would be tossed like a hot potato from node to node to node, more or less in the direction of its destination, until it ended up in the proper place. If big pieces of the network had been blown away, that simply wouldn't matter; the packets would still stay airborne, lateralled wildly across the field by whatever nodes happened to survive. This rather haphazard delivery system might be "inefficient" in the usual sense (especially compared to, say, the telephone system) -- but it would be extremely rugged.
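The hot-potato idea is simple enough to sketch in a few lines of code. The following toy simulation in Python is purely illustrative -- the node names, links, and routing rule are invented for this example, and Baran's actual designs (and ARPANET's real protocols) were far more sophisticated. It shows the essential point: each packet finds its own way through whatever nodes happen to survive, and the message is reassembled at the far end from whatever gets through.

    import random

    # Toy "hot potato" packet network: every node forwards each packet to
    # some surviving neighbor, with no central authority and no fixed route.
    # An illustrative sketch only -- not ARPANET's actual routing algorithm.

    NETWORK = {                      # hypothetical node names and links
        "UCLA": ["SRI", "UCSB"],
        "SRI":  ["UCLA", "UTAH", "UCSB"],
        "UCSB": ["UCLA", "SRI", "UTAH"],
        "UTAH": ["SRI", "UCSB"],
    }

    def send(message, src, dst, dead_nodes=(), max_hops=50):
        """Split a message into packets and route each one independently."""
        # The surviving network: drop dead nodes and any links to them.
        alive = {n: [m for m in links if m not in dead_nodes]
                 for n, links in NETWORK.items() if n not in dead_nodes}
        packets = list(enumerate(message.split()))   # (sequence number, payload)
        received = {}
        for seq, payload in packets:
            node = src
            for _ in range(max_hops):
                if node == dst:                      # packet arrived
                    received[seq] = payload
                    break
                neighbors = alive.get(node, [])
                if not neighbors:                    # stranded packet is lost
                    break
                # Deliver directly if the destination is adjacent;
                # otherwise toss the packet to any surviving neighbor.
                node = dst if dst in neighbors else random.choice(neighbors)
        # Reassemble whatever packets made it, in sequence order.
        return " ".join(received[i] for i in sorted(received))

    # Knock out one node and the message still gets through.
    print(send("how are things on the coast", "UTAH", "UCLA", dead_nodes=["SRI"]))

Run it with different nodes in dead_nodes and the surviving fragments still trickle to their destination; only when a node is completely cut off do its packets go missing. Inefficient, yes -- but rugged.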
During the 60s, this intriguing concept of a decentralized, blastproof, packet-switching network was kicked around by RAND, MIT and UCLA. The National Physical Laboratory in Great Britain set up the first test network on these principles in 1968. Shortly afterward, the Pentagon's Advanced Research Projects Agency decided to fund a larger, more ambitious project in the USA. The nodes of the network were to be high-speed supercomputers (or what passed for supercomputers at the time). These were rare and valuable machines which were in real need of good solid networking, for the sake of national research-and-development projects.
In fall 1969, the first such node was installed at UCLA. By December 1969, there were four nodes on the infant network, which was named ARPANET, after its Pentagon sponsor.
The four computers could transfer data on dedicated high-speed transmission lines. They could even be programmed remotely from the other nodes. Thanks to ARPANET, scientists and researchers could share one another's computer facilities by long-distance. This was a very handy service, for computer-time was precious in the early '70s. In 1971 there were fifteen nodes in ARPANET; by 1972, thirty-seven nodes. And it was good.
By the second year of operation, however, an odd fact became clear. ARPANET's users had warped the computer-sharing network into a dedicated, high-speed, federally subsidized electronic post-office. The main traffic on ARPANET was not long-distance computing. Instead, it was news and personal messages. Researchers were using ARPANET to collaborate on projects, to trade notes on work, and eventually, to downright gossip and schmooze. People had their own personal user accounts on the ARPANET computers, and their own personal addresses for electronic mail. Not only were they using ARPANET for person-to-person communication, but they were very enthusiastic about this particular service -- far more enthusiastic than they were about long-distance computation.
It wasn't long before the invention of the mailing-list, an ARPANET broadcasting technique in which an identical message could be sent automatically to large numbers of network subscribers. Interestingly, one of the first really big mailing-lists was "SF-LOVERS," for science fiction fans. Discussing science fiction on the network was not work-related and was frowned upon by many ARPANET computer administrators, but this didn't stop it from happening.
Throughout the '70s, ARPA's network grew. Its decentralized structure made expansion easy. Unlike standard corporate computer networks, the ARPA network could accommodate many different kinds of machine. As long as individual machines could speak the packet-switching lingua franca of the new, anarchic network, their brand-names, and their content, and even their ownership, were irrelevant.