It was just such a mistaken, self-evident assumption that caused geometry itself to be mis-classified as a branch of mathematics for over two millennia, from about 300 BC when Euclid wrote his Elements, to the nineteenth century (and indeed in most dictionaries and schoolbooks to this day). Euclidean geometry formed part of every mathematician’s intuition. Eventually some mathematicians began to doubt that one in particular of Euclid’s axioms was self-evident (the so-called ‘parallel axiom’). They did not, at first, doubt that this axiom was true. The great German mathematician Carl Friedrich Gauss is said to have been the first to put it to the test. The parallel axiom is required in the proof that the angles of a triangle add up to 180°. Legend has it that, in the greatest secrecy (for fear of ridicule), Gauss placed assistants with lanterns and theodolites at the summits of three hills, the vertices of the largest triangle he could conveniently measure. He detected no deviation from Euclid’s predictions, but we now know that that was only because his instruments were not sensitive enough. (The vicinity of the Earth happens to be rather a tame place geometrically.) Einstein’s general theory of relativity included a new theory of geometry that contradicted Euclid’s and has been vindicated by experiment. The angles of a real triangle really do not necessarily add up to 180°: the true total depends on the gravitational field within the triangle.
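To make the angle-sum claim concrete, here is a worked example from spherical geometry rather than from general relativity itself (the sphere is a standard analogy for curved space, not an illustration drawn from the text above). For a triangle drawn on a sphere of radius R, with angles α, β, γ and enclosed area A, Girard's theorem gives

\[
\alpha + \beta + \gamma \;=\; \pi + \frac{A}{R^{2}},
\]

so the angle sum exceeds 180° by an amount proportional to the area enclosed, and Euclid's value is recovered only in the flat limit R → ∞.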

A very similar mis-classification has been caused by the fundamental mistake that mathematicians since antiquity have been making about the very nature of their subject, namely that mathematical knowledge is more certain than any other form of knowledge. Having made that mistake, one has no choice but to classify proof theory as part of mathematics, for a mathematical theorem could not be certain if the theory that justifies its method of proof were itself uncertain. But as we have just seen, proof theory is not a branch of mathematics — it is a science. Proofs are not abstract. There is no such thing as abstractly proving something, just as there is no such thing as abstractly calculating or computing something. One can of course define a class of abstract entities and call them ‘proofs’, but those ‘proofs’ cannot verify mathematical statements because no one can see them. They cannot persuade anyone of the truth of a proposition, any more than an abstract virtual-reality generator that does not physically exist can persuade people that they are in a different environment, or an abstract computer can factorize a number for us. A mathematical ‘theory of proofs’ would have no bearing on which mathematical truths can or cannot be proved in reality, just as a theory of abstract ‘computation’ has no bearing on what mathematicians — or anyone else — can or cannot calculate in reality, unless there is a separate, empirical reason for believing that the abstract ‘computations’ in the theory resemble real computations. Computations, including the special computations that qualify as proofs, are physical processes. Proof theory is about how to ensure that those processes correctly mimic the abstract entities they are intended to mimic.

Gödel’s theorems have been hailed as ‘the first new theorems of pure logic for two thousand years’. But that is not so: Gödel’s theorems are about what can and cannot be proved, and proof is a physical process. Nothing in proof theory is a matter of logic alone. The new way in which Gödel managed to prove general assertions about proofs depends on certain assumptions about which physical processes can or cannot represent an abstract fact in a way that an observer can detect and be convinced by. Gödel relied on such assumptions, both explicitly and tacitly, in the justification of his results. His results were accepted as self-evidently justified, not because they were a matter of ‘pure logic’ but because mathematicians found those assumptions self-evident.

One of Gödel’s assumptions was the traditional one that a proof can have only a finite number of steps. The intuitive justification of this assumption is that we are finite beings and could never grasp a literally infinite number of assertions. This intuition, by the way, caused many mathematicians to worry when, in 1976, Kenneth Appel and Wolfgang Haken used a computer to prove the famous ‘four-colour conjecture’ (that using only four different colours, any map drawn in a plane can be coloured so that no two adjacent regions have the same colour). The program required hundreds of hours of computer time, which meant that the steps of the proof, if written down, could not have been read, let alone recognized as self-evident, by a human being in many lifetimes. ‘Should we take the computer’s word for it that the four-colour conjecture is proved?’, the sceptics wondered — though it had never occurred to them to catalogue all the firings of all the neurons in their own brains when they accepted a relatively ‘simple’ proof.
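To see what is being claimed, the conjecture (now a theorem) can be illustrated computationally. The sketch below is purely illustrative and bears no resemblance to the Appel–Haken proof, which instead checks an ‘unavoidable set’ of reducible configurations; the region names and adjacencies here are invented for the example.

```python
# Toy illustration of the four-colour property: backtracking search for a
# colouring of a small planar map's adjacency graph. The map (regions and
# which ones border which) is invented for this example.

COLOURS = ["red", "green", "blue", "yellow"]

ADJACENT = {
    "A": {"B", "C", "D"},
    "B": {"A", "C", "E"},
    "C": {"A", "B", "D", "E"},
    "D": {"A", "C", "E"},
    "E": {"B", "C", "D"},
}

def four_colour(assignment=None, remaining=None):
    """Return a colouring in which no two adjacent regions share a colour,
    or None if the search fails (which, for a planar map, the four-colour
    theorem says it never will)."""
    if assignment is None:
        assignment, remaining = {}, list(ADJACENT)
    if not remaining:
        return assignment
    region, rest = remaining[0], remaining[1:]
    for colour in COLOURS:
        # Try this colour only if no already-coloured neighbour uses it.
        if all(assignment.get(n) != colour for n in ADJACENT[region]):
            result = four_colour({**assignment, region: colour}, rest)
            if result is not None:
                return result
    return None  # dead end: backtrack

print(four_colour())
# e.g. {'A': 'red', 'B': 'green', 'C': 'blue', 'D': 'green', 'E': 'red'}
```

Checking a colouring like this is trivial; what Appel and Haken's program performed was the vastly larger case analysis needed to show that no planar map can require a fifth colour.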

The same worry may seem more justified when applied to a putative proof with an infinite number of steps. But what is a ‘step’, and what is ‘infinite’? In the fifth century BC Zeno of Elea concluded, on the basis of a similar intuition, that Achilles will never overtake the tortoise if the tortoise has a head start. After all, by the time Achilles reaches the point where the tortoise is now, it will have moved on a little. By the time he reaches that point, it will have moved a little further, and so on ad infinitum. Thus the ‘catching-up’ procedure requires Achilles to perform an infinite number of catching-up steps, which as a finite being he supposedly cannot do. But what Achilles can do cannot be discovered by pure logic. It depends entirely on what the governing laws of physics say he can do. And if those laws say he will overtake the tortoise, then overtake it he will. According to classical physics, catching up requires an infinite number of steps of the form ‘move to the tortoise’s present location’. In that sense it is a computationally infinite operation. Equivalently, considered as a proof that one abstract quantity becomes larger than another when a given set of operations is applied, it is a proof with an infinite number of steps. But the relevant laws designate it as a physically finite process — and that is all that counts.
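The point can be made quantitative with a standard calculation (not part of the original argument). Suppose Achilles runs at speed v_A, the tortoise crawls at v_T < v_A, and the tortoise has a head start d. The n-th catching-up step then takes time

\[
t_n = \frac{d}{v_A}\left(\frac{v_T}{v_A}\right)^{n-1},
\qquad
\sum_{n=1}^{\infty} t_n \;=\; \frac{d}{v_A}\cdot\frac{1}{1 - v_T/v_A} \;=\; \frac{d}{v_A - v_T},
\]

so the infinitely many steps are all completed within a finite time: an operation that is computationally infinite in Zeno's sense, yet physically finite.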

Gödel’s intuition about steps and finiteness does, as far as we know, capture real physical constraints on the process of proof. Quantum theory requires discrete steps, and none of the known ways in which physical objects can interact would allow for an infinite number of steps to precede a measurable conclusion. (It might, however, be possible for an infinite number of steps to be completed in the whole history of the universe — as I shall explain in Chapter 14.) Classical physics would not have conformed to these intuitions if (impossibly) it had been true. For example, the continuous motion of classical systems would have allowed for ‘analogue’ computation which did not proceed in steps and which had a substantially different repertoire from the universal Turing machine. Several examples are known of contrived classical laws under which an infinite amount of computation (infinite, that is, by Turing-machine or quantum-computer standards) could be performed by physically finite methods. Of course, classical physics is incompatible with the results of countless experiments, so it is rather artificial to speculate on what the ‘actual’ classical laws of physics ‘would have been’; but what these examples show is that one cannot prove, independently of any knowledge of physics, that a proof must consist of finitely many steps. The same considerations apply to the intuition that there must be finitely many rules of inference, and that these must be ‘straightforwardly applicable’. None of these requirements is meaningful in the abstract: they are physical requirements. Hilbert, in his influential essay ‘On the Infinite’, contemptuously ridiculed the idea that the ‘finite-number-of-steps’ requirement is a substantive one. But the above argument shows that he was mistaken: it is substantive, and it follows only from his and other mathematicians’ physical intuition.