
78886090522101180541172856528278622 96732064351090230047702789306640625

(we've broken the number in two so that it fits the page width) which has 70 digits. It took a computer algebra system about five seconds to work that out, by the way, and about 4.999 of those seconds were taken up with giving it the instructions. And most of the rest was used up printing the result to the screen. Anyway, you now see why combinatorics is the art of counting without actually counting; if you listed all the possibilities and counted them '1, 2, 3, 4 ...' you'd never finish. So it's a good job that the university administrator wasn't in charge of car parking.

How big is L-space? The Librarian said it is infinite, which is true if you use 'infinite' to mean 'a much larger number than I can envisage', or if you don't place an upper limit on how big a book can be,14 or if you allow all possible alphabets, syllabaries, and pictograms. If we stick to 'ordinary-sized' English books, we can reduce the estimate. A typical book is 100,000 words long, or about 600,000 characters (letters and spaces; we'll ignore punctuation marks). There are 26 letters in the English alphabet, plus a space, making 27 characters that can go into each of the 600,000 possible positions. The counting principle that we used to solve the car-parking problem now implies that the maximum number of books of this length is 27^600,000, which is roughly 10^860,000 (that is, an 860,000-digit number). Of course, most of those 'books' make very little sense, because we've not yet insisted that the letters make sensible words. If we assume that the words are drawn from a list of 10,000 standard ones, and calculate the number of ways to arrange 100,000 words in order, then the figure changes. 10,000^100,000 is equal to 10^400,000, and this is quite a bit smaller ... but still enormous. Mind you, most of those books wouldn't make much sense either; they'd read something like 'Cabbage patronymic forgotten prohibit hostile quintessence'

continuing at book length.15 So maybe we ought to work with sentences ... At any rate, even if we cut the numbers down in that manner, it turns out that the universe is not big enough to contain that many physical books. So it's a good job that L-space is available, and now we know why there's never enough shelf space. We like to think that our major libraries, such as the British Library or the Library of Congress, are pretty big. But, in fact, the set of books that actually exist is a tiny, tiny fraction of L-space, the space of all the books that could have existed. In particular, we're never going to run out of new books to write.

Poincaré's phase space viewpoint has proved so useful that nowadays you'll find it in every area of science, and in areas that aren't science at all. A major consumer of phase spaces is economics. Suppose that a national economy involves a million different goods: cheese, bicycles, rats-on-a-stick, and so on. Associated with each good is a price, say £2.35 for a lump of cheese, £449.99 for a bicycle, £15.00 for a rat-on-a-stick. So the state of the economy is a list of one million numbers. The phase space consists of all possible lists of a million numbers, including many lists that make no economic sense at all, such as lists that include the £0.02 bicycle or the £999,999,999.95 rat. The economist's job is to discover the principles that select, from the space of all possible lists of numbers, the actual list that is observed.

The classic principle of this kind is the Law of Supply and Demand, which says that if goods are in short supply and you really, really want them, then the price goes up. It sometimes works, but it often doesn't. Finding such laws is something of a black art, and the results are not totally convincing, but that just tells us that economics is hard. Poor results notwithstanding, the economist's way of thinking is a phase space point of view.

Here's a little tale that shows just how far removed economic theory is from reality. The basis of conventional economics is the idea of a rational agent with perfect information, who maximises utility. According to these assumptions, a taxi-driver, for example, will arrange his activities to generate the most money for the least effort.

Now, the income of a taxi-driver depends on circumstances. On good days, with lots of passengers around, he will do well; on bad days, he won't. A rational taxi-driver will therefore work longer on good days and give up early on bad ones. However, a study of taxi-drivers in New York carried out by Colin Camerer and others shows the exact opposite. The taxi-drivers seem to set themselves a daily target, and stop working once they reach it. So they work shorter hours on good days, and longer hours on bad ones. They could increase their earnings by 8 per cent just by working the same number of hours every day, for the same total working time. If they worked longer on good days and shorter on bad ones, they could increase their earnings by 15 per cent. But they don't have a good enough intuition for economic phase space to appreciate this. They are adopting a common human trait of placing too much value on what they have today, and too little on what they may gain tomorrow.

Biology, too, has been invaded by phase spaces. The first of these to gain widespread currency was DNA-space. Associated with every living organism is its genome, a string of chemical molecules called DNA. The DNA molecule is a double helix, two spirals wrapped round a common core. Each spiral is made up of a string of 'bases' or 'nucleotides', which come in four varieties: cytosine, guanine, adenine, thymine, normally abbreviated to their initials C, G, A, T.

The sequences on the two strings are 'complementary': wherever C appears on one string, you get G on the other, and similarly for A and T. The DNA thus contains two copies of the sequence, one positive and one negative, so to speak. In the abstract, then, the genome can be thought of as a single sequence of these four letters, something like AATGGCCTCAG ... going on for rather a long time. The human genome, for example, goes on for about three billion letters.
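The complementarity rule is simple enough to state as a letter-for-letter lookup. A minimal sketch (the function name is ours, and for simplicity it reads the complement in the same direction, ignoring the fact that the two strands of real DNA run antiparallel):

```python
# Complementary base pairing: C pairs with G, A pairs with T.
PAIRING = str.maketrans("CGAT", "GCTA")

def complement(strand: str) -> str:
    """Return the complementary strand, position by position."""
    return strand.translate(PAIRING)

print(complement("AATGGCCTCAG"))  # -> TTACCGGAGTC
```

Apply it twice and you get the original strand back, which is exactly the 'positive and negative copies' idea.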

The phase space for genomes, DNA-space, consists of all possible sequences of a given length. If we're thinking about human beings, the relevant DNA-space comprises all possible sequences of three billion code letters C, G, A, T. How big is that space? It's the same problem as the cars in the car park, mathematically speaking, so the answer is 4 x 4 x 4 x ... x 4 with three billion 4s. That is, 4^3,000,000,000. This number is a lot bigger than the 70-digit number we got for the car-parking problem. It's a lot bigger than L-space for normal-size books, too. In fact, it has about 1,800,000,000 digits. If you wrote it out with 3,000 digits per page, you'd need a 600,000-page book to hold it.
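The same logarithm trick checks this estimate without ever computing the number itself, a quick sketch:

```python
import math

# Decimal digits of 4^3,000,000,000: floor(3e9 * log10(4)) + 1.
digits = math.floor(3_000_000_000 * math.log10(4)) + 1
print(digits)              # about 1.8 billion digits

# Pages needed at 3,000 digits per page (rounding up).
print(-(-digits // 3000))  # about 600,000 pages
```

The exact counts come out at 1,806,179,974 digits and 602,060 pages, matching the round figures in the text.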

The image of DNA-space is very useful for geneticists who are considering possible changes to DNA sequences, such as 'point mutations' where one code letter is changed, say as the result of a copying error or an incoming high-energy cosmic ray. Viruses, in particular, mutate so rapidly that it makes little sense to talk of a viral species as a fixed thing. Instead, biologists talk of quasi-species, and visualise these as clusters of related sequences in DNA-space. The clusters slosh around as time passes, but they stay together as one cluster, which allows the virus to retain its identity.

In the whole of human history, the total number of people has been no more than ten billion, a mere 11-digit number. This is an incredibly tiny fraction of all those possibilities. So actual human beings have explored the tiniest portion of DNA-space, just as actual books have explored the tiniest portion of L-space. Of course, the interesting questions are not as straightforward as that. Most sequences of letters do not make up a sensible book; most DNA