One of the attractive features of this approach to confirmation is that when the evidence would be highly improbable if the hypothesis were false—that is, when Pr−H(E) is extremely small—it is easy to see how a hypothesis with a quite low prior probability can acquire a probability close to 1 when the evidence comes in. (This holds even when Pr(H) is quite small and Pr(−H), the probability that H is false, correspondingly large; if E follows deductively from H, PrH(E) will be 1; hence, if Pr−H(E) is tiny, the numerator of the right side of the formula will be very close to the denominator, and the value of the right side thus approaches 1.)
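In symbols (reading “the formula” as Bayes’s theorem with its denominator expanded by the law of total probability, and writing PrE(H) for the probability of H given the evidence E, in keeping with the subscript notation above), the reasoning runs as follows:

\[
\mathrm{Pr}_E(H) \;=\; \frac{\mathrm{Pr}(H)\,\mathrm{Pr}_H(E)}{\mathrm{Pr}(H)\,\mathrm{Pr}_H(E) \;+\; \mathrm{Pr}(-H)\,\mathrm{Pr}_{-H}(E)},
\]

so that, when E follows deductively from H and hence PrH(E) = 1,

\[
\mathrm{Pr}_E(H) \;=\; \frac{\mathrm{Pr}(H)}{\mathrm{Pr}(H) \;+\; \mathrm{Pr}(-H)\,\mathrm{Pr}_{-H}(E)} \;\longrightarrow\; 1
\quad\text{as}\quad \mathrm{Pr}_{-H}(E) \to 0.
\]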
Any use of Bayes’s theorem to reconstruct scientific reasoning plainly depends on the idea that scientists can assign the pertinent probabilities, both the prior probabilities and the probabilities of the evidence conditional on various hypotheses. But how should scientists conclude that the probability of an interesting hypothesis takes on a particular value or that a certain evidential finding would be extremely improbable if the interesting hypothesis were false? The simple example of drawing from a deck of cards is potentially misleading in this respect, because in that case there is a straightforward way of calculating the probability that a specific card, such as the king of hearts, will be drawn: each of the 52 cards in a well-shuffled deck is equally likely to turn up. There is no obvious analogue for scientific hypotheses. It would seem foolish, for example, to suppose that there is some list of potential scientific hypotheses, each of which is equally likely to hold true of the universe.
Bayesians are divided in their responses to this difficulty. A relatively small minority—the so-called “objective” Bayesians—hope to find objective criteria for the rational assignment of prior probabilities. The majority position—“subjective” Bayesianism, sometimes also called personalism—supposes, by contrast, that no such criteria are to be found. The only limits on rational choice of prior probabilities stem from the need to give each truth of logic and mathematics the probability 1 and to provide a value different from both 0 and 1 for every empirical statement. The former proviso reflects the view that the laws of logic and mathematics cannot be false; the latter embodies the idea that any statement whose truth or falsity is not determined by the laws of logic and mathematics might turn out to be true (or false).
On the face of it, subjective Bayesianism appears incapable of providing any serious reconstruction of scientific reasoning. Thus, imagine two scientists of the late 17th century who differ in their initial assessments of Newton’s account of the motions of the heavenly bodies. One begins by assigning the Newtonian hypothesis a small but significant probability; the other attributes a probability that is truly minute. As they collect evidence, both modify their probability judgments in accordance with Bayes’s theorem, and, in both instances, the probability of the Newtonian hypothesis goes up. For the first scientist it approaches 1. The second, however, has begun with so minute a probability that, even with a large body of positive evidence for the Newtonian hypothesis, the final value assigned is still tiny. From the subjective Bayesian perspective, both have proceeded impeccably. Yet, at the end of the day, they diverge quite radically in their assessment of the hypothesis.
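The divergence can be made concrete with a small numerical sketch in Python (the priors, the likelihoods, and the 20 observations are purely illustrative and are not drawn from the historical episode):

```python
# Illustrative sketch only: two inquirers update the same hypothesis H on the
# same stream of favourable evidence, differing only in their prior for H.
# Each piece of evidence is assumed to follow deductively from H (so its
# probability given H is 1) and to have probability 0.5 given not-H.

def bayes_update(prior, likelihood_if_true=1.0, likelihood_if_false=0.5):
    """One application of Bayes's theorem: return Pr_E(H) given Pr(H)."""
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator

priors = {"cautious sceptic": 0.05, "extreme sceptic": 1e-12}

for label, probability in priors.items():
    for _ in range(20):                     # twenty favourable observations
        probability = bayes_update(probability)
    print(f"{label}: posterior after 20 observations = {probability:.6g}")

# Approximate output:
#   cautious sceptic: posterior after 20 observations = 0.999982
#   extreme sceptic: posterior after 20 observations = 1.04857e-06
```

With these numbers the first inquirer ends up all but certain of the hypothesis, while the second still assigns it a probability of roughly one in a million, even though both have applied Bayes’s theorem correctly at every step.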
If one supposes that the evidence obtained is like that acquired in the decades after the publication of Newton’s hypothesis in his Principia (Philosophiae naturalis principia mathematica, 1687), it may seem possible to resolve the issue as follows: even though both investigators were initially skeptical (both assigned small prior probabilities to Newton’s hypothesis), one gave the hypothesis a serious chance and the other did not; the inquirer who started with the truly minute probability made an irrational judgment that infects the conclusion. No subjective Bayesian can tolerate this diagnosis, however. The Newtonian hypothesis is not a logical or mathematical truth (or a logical or mathematical falsehood), and both scientists give it a probability different from 0 and 1. By subjective Bayesian standards, that is all rational inquirers are asked to do.
The orthodox response to worries of this type is to offer mathematical theorems that demonstrate how individuals starting with different prior probabilities will eventually converge on a common value. Indeed, were the imaginary investigators to keep going long enough, their eventual assignments of probability would differ by an amount as tiny as one cared to make it. In the long run, scientists who lived by Bayesian standards would agree. But, as the English economist (and contributor to the theory of probability and confirmation) John Maynard Keynes (1883–1946) once observed, “in the long run we are all dead.” Scientific decisions are inevitably made in a finite period of time, and the same mathematical explorations that yield convergence theorems will also show that, given a fixed period for decision making, however long it may be, there can be people who satisfy the subjective Bayesian requirements and yet remain about as far apart as possible, even at the end of the evidence-gathering period.
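Both halves of the point, convergence in the limit and possible divergence within any fixed period, can be illustrated by extending the sketch above (again with purely illustrative numbers; the convergence theorems themselves are far more general):

```python
# Illustrative sketch only: long-run convergence versus divergence at any fixed time.
# As before, each favourable observation is assumed to double the odds on H.

def posterior(prior, n, likelihood_ratio=2.0):
    """Posterior probability of H after n favourable observations."""
    odds = (prior / (1 - prior)) * likelihood_ratio ** n
    return odds / (1 + odds)

# Convergence: two fixed, nonzero priors are driven together as evidence mounts.
# The printed gap is roughly 1, then about 9e-10, then 0 (to machine precision).
for n in (10, 50, 100):
    gap = abs(posterior(0.3, n) - posterior(1e-6, n))
    print(f"after {n:3d} observations the posteriors differ by {gap:.2g}")

# The finite-time caveat: for ANY fixed n, there is a legitimate prior (strictly
# between 0 and 1) so small that the posterior is still only the chosen bound
# (here 0.01) after all n observations have come in.
def prior_yielding_posterior(bound, n, likelihood_ratio=2.0):
    target_odds = bound / (1 - bound)
    prior_odds = target_odds / likelihood_ratio ** n
    return prior_odds / (1 + prior_odds)

print(prior_yielding_posterior(0.01, 100))   # ~8e-33: a permissible prior, yet the
                                             # posterior after 100 observations is only 0.01
```

However many observations are allowed, a sufficiently extreme yet perfectly legitimate prior leaves its holder essentially unmoved at the moment the decision must be made.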
Eliminativism and falsification

Subjective Bayesianism is currently the most popular view of the confirmation of scientific hypotheses, partly because it seems to accord with important features of confirmation and partly because it is both systematic and precise. But the worry just outlined is not the only concern that critics press and defenders endeavour to meet. Among others is the objection that explicit assignments of probabilities seem to figure in scientific reasoning only when the focus is on statistical hypotheses. A more homely view of testing and the appraisal of hypotheses suggests that scientists proceed by the method of Sherlock Holmes: they formulate rival hypotheses and apply tests designed to eliminate some until the hypothesis that remains, however antecedently implausible, is judged correct. Unlike Bayesianism, this approach to scientific reasoning is explicitly concerned with the acceptance and rejection of hypotheses and thus seems far closer to the everyday practice of scientists than the revision of probabilities. But eliminativism, as this view is sometimes called, also faces serious challenges.
The first main worry centres on the choice of alternatives. In the setting of the country-house murder, Sherlock Holmes (or his counterpart) has a clear list of suspects. In scientific inquiries, however, no such complete roster of potential hypotheses is available. For all anyone knows, the correct hypothesis might not figure among the rivals under consideration. How then can the eliminative procedure provide any confidence in the hypothesis left standing at the end? Eliminativists are forced to concede that this is a genuine difficulty and that there can be many situations in which it is appropriate to wonder whether the initial construction of possibilities was unimaginative. If they believe that inquirers are sometimes justified in accepting the hypothesis that survives an eliminative process, then they must formulate criteria for distinguishing the situations in which such acceptance is warranted from those in which it is not. By the early 21st century, no one had yet offered any such precise criteria.
An apparent method of avoiding the difficulty just raised would be to emphasize the tentative character of scientific judgment. This tactic was pursued with considerable thoroughness by the Austrian-born British philosopher Karl Popper (1902–94), whose views about scientific reasoning probably had more influence on practising scientists than those of any other philosopher. Although not himself a logical positivist, Popper shared many of the aspirations of those who wished to promote “scientific philosophy.” Instead of supposing that traditional philosophical discussions failed because they lapsed into meaninglessness, he offered a criterion of demarcation in terms of the falsifiability of genuine scientific hypotheses. That criterion was linked to his reconstruction of scientific reasoning: science, he claimed, consists of bold conjectures that scientists endeavour to refute, and the conjectures that survive are given tentative acceptance. Popper thus envisaged an eliminative process that begins with the rival hypotheses that a particular group of scientists happen to have thought of, and he responded to the worry that the successful survival of a series of tests might not be any indicator of truth by emphasizing that scientific acceptance is always tentative and provisional.