Similarly, if we understand knowledge and adaptation as structure which extends across large numbers of universes, then we expect the principles of epistemology and evolution to be expressible directly as laws about the structure of the multiverse. That is, they are physical laws, but at an emergent level. Admittedly, quantum complexity theory has not yet reached the point where it can express, in physical terms, the proposition that knowledge can grow only in situations that conform to the Popperian pattern shown in Figure 3.3. But that is just the sort of proposition that I expect to appear in the nascent Theory of Everything, the unified explanatory and predictive theory of all four strands.
That being so, the view that quantum physics is swallowing the other strands must be regarded merely as a narrow, physicist’s perspective, tainted, perhaps, by reductionism. Indeed, each of the other three strands is quite rich enough to form the whole foundation of some people’s world-view in much the same way that fundamental physics forms the foundation of a reductionist’s world-view. Richard Dawkins thinks that ‘If superior creatures from space ever visit Earth, the first question they will ask, in order to assess the level of our civilisation, is: “Have they discovered evolution yet?”’ Many philosophers have agreed with René Descartes that epistemology underlies all other knowledge, and that something like Descartes’s cogito ergo sum argument is our most basic explanation. Many computer scientists have been so impressed with recently discovered connections between physics and computation that they have concluded that the universe is a computer, and the laws of physics are programs that run on it. But all these are narrow, even misleading perspectives on the true fabric of reality. Objectively, the new synthesis has a character of its own, substantially different from that of any of the four strands it unifies.
For example, I have remarked that the fundamental theories of each of the four strands have been criticized, in part justifiably, for being ‘naïve’, ‘narrow’, ‘cold’, and so on. Thus, from the point of view of a reductionist physicist such as Stephen Hawking, the human race is just an astrophysically insignificant ‘chemical scum’. Steven Weinberg thinks that ‘The more the universe seems comprehensible, the more it also seems pointless. But if there is no solace in the fruits of our research, there is at least some consolation in the research itself.’ (The First Three Minutes, p. 154.) But anyone not involved in fundamental physics must wonder why.
As for computation, the computer scientist Tommaso Toffoli has remarked that ‘We never perform a computation ourselves, we just hitch a ride on the great Computation that is going on already.’ To him, this is no cry of despair — quite the contrary. But critics of the computer-science world-view do not want to see themselves as just someone else’s program running on someone else’s computer. Narrowly conceived evolutionary theory considers us mere ‘vehicles’ for the replication of our genes or memes; and it refuses to address the question of why evolution has tended to create ever greater adaptive complexity, or the role that such complexity plays in the wider scheme of things. Similarly, the (crypto-)inductivist critique of Popperian epistemology is that, while it states the conditions for scientific knowledge to grow, it seems not to explain why it grows — why it creates theories that are worth using.
As I have explained, the defence in each case depends on adducing explanations from some of the other strands. We are not merely ‘chemical scum’, because (for instance) the gross behaviour of our planet, star and galaxy depends on an emergent but fundamental physical quantity: the knowledge in that scum. The creation of useful knowledge by science, and adaptations by evolution, must be understood as the emergence of the self-similarity that is mandated by a principle of physics, the Turing principle. And so on.
Thus the problem with taking any of these fundamental theories individually as the basis of a world-view is that they are each, in an extended sense, reductionist. That is, they have a monolithic explanatory structure in which everything follows from a few extremely deep ideas. But that leaves aspects of the subject entirely unexplained. In contrast, the explanatory structure that they jointly provide for the fabric of reality is not hierarchical: each of the four strands contains principles which are ‘emergent’ from the perspective of the other three, but nevertheless help to explain them.
Three of the four strands seem to rule out human beings and human values from the fundamental level of explanation. The fourth, epistemology, makes knowledge primary but gives no reason to regard epistemology itself as having relevance beyond the psychology of our own species. Knowledge seems a parochial concept until we consider it from a multiverse perspective. But if knowledge is of fundamental significance, we may ask what sort of role now seems natural for knowledge-creating beings such as ourselves in the unified fabric of reality. This question has been explored by the cosmologist Frank Tipler. His answer, the omega-point theory, is an excellent example of a theory which is, in the sense of this book, about the fabric of reality as a whole. It is not framed within any one strand, but belongs irreducibly to all four. Unfortunately Tipler himself, in his book The Physics of Immortality, makes exaggerated claims for his theory which have caused most scientists and philosophers to reject it out of hand, thereby missing the valuable core idea which I shall now explain.
From my own perspective, the simplest point of entry to the omega-point theory is the Turing principle. A universal virtual-reality generator is physically possible. Such a machine is able to render any physically possible environment, as well as certain hypothetical and abstract entities, to any desired accuracy. Its computer therefore has a potentially unlimited requirement for additional memory, and may run for an unlimited number of steps. This was trivial to arrange in the classical theory of computation, so long as the universal computer was thought to be purely abstract. Turing simply postulated an infinitely long memory tape (with, as he thought, self-evident properties), a perfectly accurate processor requiring neither power nor maintenance, and unlimited time available. Making the model more realistic by allowing for periodic maintenance raises no problem of principle, but the other three requirements — unlimited memory capacity, and an unlimited running time and energy supply — are problematic in the light of existing cosmological theory. In some current cosmological models, the universe will recollapse in a Big Crunch after a finite time, and is also spatially finite. It has the geometry of a ‘3-sphere’, the three-dimensional analogue of the two-dimensional surface of a sphere. On the face of it, such a cosmology would place a finite bound on both the memory capacity and the number of processing steps the machine could perform before the universe ended. This would make a universal computer physically impossible, so the Turing principle would be violated. In other cosmological models the universe continues to expand for ever and is spatially infinite, which might seem to allow for an unlimited source of material for the manufacture of additional memory. Unfortunately, in most such models the density of energy available to power the computer would diminish as the universe expanded, and would have to be collected from ever further afield. Because physics imposes an absolute speed limit, the speed of light, the computer’s memory accesses would have to slow down and the net effect would again be that only a finite number of computational steps could be performed.
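The idealization in Turing’s abstract model can be made concrete with a minimal sketch, not taken from the original argument, of a single-tape machine whose tape cells are created only as the head visits them, so that memory is limited only by the simulation’s resources. The particular rule table below (a unary incrementer) is a hypothetical example chosen purely for brevity.

```python
# A toy single-tape Turing machine. The tape is a defaultdict, so any cell the
# head visits springs into existence holding a blank symbol: an effectively
# unbounded tape, mirroring the idealization Turing postulated. (Illustrative
# sketch; the machine and its rule table are hypothetical, not from the text.)
from collections import defaultdict

def run_turing_machine(rules, tape, state="start", max_steps=10_000):
    """Run until no rule applies or max_steps is reached.

    rules maps (state, symbol) to (new_symbol, move, new_state),
    where move is -1 (left) or +1 (right).
    """
    cells = defaultdict(lambda: "_", enumerate(tape))  # blank cells on demand
    head = 0
    for _ in range(max_steps):  # a finite cap stands in for 'unlimited time'
        symbol = cells[head]
        if (state, symbol) not in rules:  # no applicable rule: halt
            break
        new_symbol, move, state = rules[(state, symbol)]
        cells[head] = new_symbol
        head += move
    lo, hi = min(cells), max(cells)
    return "".join(cells[i] for i in range(lo, hi + 1)).strip("_")

# A unary incrementer: scan right over a block of 1s, then append one more 1.
rules = {
    ("start", "1"): ("1", +1, "start"),
    ("start", "_"): ("1", +1, "halt"),
}
print(run_turing_machine(rules, "111"))  # prints '1111'
```

In the abstract model nothing prevents the step limit and the tape from being as large as one pleases; the question raised above is whether physical cosmology can underwrite that assumption.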