Popper’s influence on scientists reflected his ability to capture features that investigators recognized in their own reasoning. Philosophers, however, were less convinced. For however much he emphasized the tentative character of acceptance, Popper—like the scientists who read him—plainly thought that surviving the eliminative process makes a hypothesis more worthy of being pursued or applied in a practical context. The “conjectures” are written into textbooks, taught to aspiring scientists, relied on in further research, and used as the basis for interventions in nature that sometimes affect the well-being of large numbers of people. If they attain some privileged status by enduring the fire of eliminative testing, then Popper’s view covertly presupposes a solution to the worry that elimination has merely isolated the best of a bad lot. If, on the other hand, the talk about “tentative acceptance” is taken seriously, and survival confers no special privilege, then it is quite mysterious why anybody should be entitled to use the science “in the books” in the highly consequential ways it is in fact used. Popper’s program was attractive because it embraced the virtues of eliminativism, but the rhetoric of “bold conjectures” and “tentative acceptance” should be viewed as a way of ducking a fundamental problem that eliminativists face.

A second major worry about eliminativism charged that the notion of falsification is more complex than eliminativists (including Popper) allowed. As the philosopher-physicist Pierre Duhem (1861–1916) pointed out, experiments and observations typically test a bundle of different hypotheses. When a complicated experiment reveals results that are dramatically at odds with predictions, a scientist’s first thought is not to abandon a cherished hypothesis but to check whether the apparatus is working properly, whether the samples used are pure, and so forth. A particularly striking example of this situation comes from the early responses to the Copernican system. Astronomers of the late 16th century, virtually all of whom held the traditional view that the heavenly bodies revolve around the Earth, pointed out that if, as Copernicus claimed, the Earth is in motion, then the stars should be seen at different angles at different times of the year; but no such differences were observed, and thus, they concluded, Copernicanism is false. Galileo, a champion of the Copernican view, replied that the argument is fallacious. The apparent constancy of the angles at which the stars are seen conflicts not with Copernicanism alone but with the joint hypothesis that the Earth moves and that the stars are relatively close. Galileo proposed to “save” Copernicanism from falsification by abandoning the second conjunct, claiming instead that the universe is much larger than had been suspected and that the nearest stars are so distant that the differences in their angular positions cannot be detected with the naked eye. (He was vindicated in the 19th century, when improved telescopes revealed stellar parallax.)
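
Galileo’s reply can be given a quantitative gloss. The following is a rough modern calculation (the distance figure is a modern value, not one available to Galileo): by the small-angle approximation, the parallax angle $p$ of a star at distance $d$ is

$$
p \;\approx\; \frac{1\ \text{AU}}{d}, \qquad
d \approx 4.25\ \text{ly} \approx 2.7 \times 10^{5}\ \text{AU}
\;\Longrightarrow\;
p \approx 3.7 \times 10^{-6}\ \text{rad} \approx 0.77'' ,
$$

even for the nearest star (Proxima Centauri), whereas the unaided eye resolves only about one arcminute ($60''$). Stellar parallax thus falls below naked-eye detectability as soon as the stars lie more than a few thousand astronomical units away, so Galileo’s “save” required only a modest enlargement of the universe by the standards of the truth later revealed.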

Eliminativism needs an account of when it is rationally acceptable to deflect an experimental challenge onto some auxiliary hypothesis and when the hypothesis under test should be abandoned. It must distinguish the case of Galileo from that of someone who insists on a pet hypothesis in the teeth of the evidence, citing the possibility that hitherto unsuspected spirits are disrupting the trials. The problem is especially severe for Popper’s version of eliminativism, since, if all hypotheses are tentative, there would appear to be no recourse to background knowledge, on the basis of which some possibilities can be dismissed as just not serious.

Underdetermination

The complexities of the notion of falsification, originally diagnosed by Duhem, had considerable impact on contemporary philosophy of science through the work of the American philosopher W.V.O. Quine (1908–2000). Quine proposed a general thesis of the underdetermination of theory by evidence, arguing that it is always possible to preserve any hypothesis in the face of any evidence. This thesis can be understood as a bare logical point, to the effect that an investigator can always find some consistent way of dealing with observations or experiments so as to continue to maintain a chosen hypothesis (perhaps by claiming that the apparent observations are the result of hallucination). So conceived, it appears trivial. Alternatively, one can interpret it as proposing that all the criteria of rationality and scientific method permit some means of protecting the favoured hypothesis from the apparently refuting results. On the latter reading, Quine went considerably beyond Duhem, who held that the “good sense” of scientists enables them to distinguish legitimate from illegitimate ways of responding to recalcitrant findings.

The stronger interpretation of the thesis is sometimes inspired by a small number of famous examples from the history of physics. In the early 18th century, there was a celebrated debate between Leibniz and Samuel Clarke (1675–1729), an acolyte of Newton, over the “true motions” of the heavenly bodies. Clarke, following Newton, defined true motion as motion with respect to absolute space and claimed that the centre of mass of the solar system was at rest with respect to absolute space. Leibniz countered by pointing out that, if the centre of mass of the solar system were moving with any uniform velocity with respect to absolute space, all the observations one could ever make would be the same as they would be if it were at rest. In effect, he offered infinitely many alternatives to the Newtonian theory, each of which seemed equally well supported by any data that could be collected. Recent discussions in the foundations of physics have sometimes suggested a similar moral: perhaps there are rival versions of string theory, each of which is equally well supported by all the evidence that could ever become available.
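
The observational equivalence Leibniz pointed to can be made precise in modern notation, though the formalism (Galilean invariance) is anachronistic to the 18th-century debate. Under a boost of every body’s position by a constant velocity $\mathbf{v}$,

$$
\mathbf{x}_i'(t) = \mathbf{x}_i(t) + \mathbf{v}\,t
\;\Longrightarrow\;
\mathbf{x}_i'(t) - \mathbf{x}_j'(t) = \mathbf{x}_i(t) - \mathbf{x}_j(t),
\qquad
\ddot{\mathbf{x}}_i' = \ddot{\mathbf{x}}_i ,
$$

relative positions and accelerations are unchanged. Since Newtonian forces depend only on relative positions, $F = ma$ holds in exactly the same form for every constant $\mathbf{v}$: each choice of $\mathbf{v}$ is a distinct hypothesis about the “true” velocity of the solar system’s centre of mass, and all of them fit any possible observation equally well.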

Such examples, which illustrate the complexities inherent in the notion of falsification, raise two important questions: first, when cases of underdetermination arise, what is it reasonable to believe? And second, how frequently do such cases arise? One very natural response to the motivating examples from physics is to suggest that, when one recognizes that genuinely rival hypotheses could each be embedded in a body of theory that would be equally well supported by any available evidence, one should look for a more minimal hypothesis that will somehow “capture what is common” to the apparent alternatives. If that natural response is right, then the examples do not really support Quine’s sweeping thesis, for they do not license belief in either (or any) of a pair (or collection) of alternatives but rather insist on articulating a different, more minimal, view.

A second objection to the strong thesis of underdetermination is that the historical examples are exceptional. Certain kinds of mathematical theories, together with plausible assumptions about the evidence that can be collected, allow for the formulation of serious alternatives. In most areas of science, however, there is no obvious way to invoke genuine rivals. Since the 1950s, for example, scientists have held that DNA molecules have the structure of a double helix, in which the bases jut inward, like the rungs of a ladder, and that there are simple rules of base pairing. If Quine’s global thesis were correct, there should be some scientific rival that would account equally well for the vast range of data that supports this hypothesis. Not only has no such rival been proposed, but there are simply no good reasons for thinking that any exists.
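
The “simple rules of base pairing” in question are the Watson-Crick pairings: adenine with thymine, guanine with cytosine. A minimal sketch in Python (the function name and sample strand are illustrative, not drawn from any source):

```python
# Watson-Crick base pairing: A pairs with T, G pairs with C.
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement_strand(strand: str) -> str:
    """Return the base sequence that pairs with `strand`, rung by rung."""
    return "".join(PAIR[base] for base in strand)

# Each base on one strand fixes its partner on the other,
# which is why one strand determines the whole "ladder".
assert complement_strand("ATTGCAC") == "TAACGTG"
```

The determinacy the snippet displays is part of what makes the example telling: the hypothesis is so tightly constrained by so many kinds of data that no serious rival has ever been formulated.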