Of course, given the nascent state of many of these disciplines, CRTT is far from a finished theory. It is rather a research program, like the proposal in early chemistry that the chemical elements consist of some kind of atoms. Just as early chemists did not have a clue about the complexities that would eventually emerge about the nature of these atoms, so cognitive scientists probably have no more than very general ideas about the character of the computations and representations that human thought actually involves. But, as in the case of atomic theory, CRTT seems to be steering research in promising directions.

The computational account of rationality

The chief inspiration for CRTT was the development of formal logic, the modern systematization of deductive reasoning (see above Deduction). This systematization made at least deductive validity purely a matter of derivations (conclusions from premises) that are defined solely in terms of the form—the syntax, or spelling—of the sentences involved. The work of Turing showed how such formal derivations could be executed mechanically by a Turing machine, a hypothetical computing device that operates by moving forward and backward on an indefinitely long tape and scanning cells on which it prints and erases symbols in some finite alphabet. Turing’s demonstrations of the power of these machines strongly supported his claim (now called the Church-Turing thesis) that anything that can be computed at all can be computed by a Turing machine. This idea, of course, led directly to the development of modern computers, as well as to the more general research programs of artificial intelligence and cognitive science. The hope of CRTT was that all reasoning—deductive, inductive, abductive, and practical—could be reduced to this kind of mechanical computation (though it was naturally assumed that the actual architecture of the brain is not the same as the architecture of a Turing machine).
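
The idea can be made concrete with a short sketch. The following Python fragment is a minimal, illustrative Turing machine of the kind just described: a tape of cells, a head that moves back and forth, and a finite table of rules. The particular rule table, which merely appends a stroke to a unary numeral, is a hypothetical example, not anything drawn from Turing's own work.

```python
from collections import defaultdict

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
    """rules maps (state, symbol) -> (new_state, new_symbol, move of -1 or +1)."""
    cells = defaultdict(lambda: blank, enumerate(tape))  # indefinitely long tape
    head = 0
    for _ in range(max_steps):
        key = (state, cells[head])
        if key not in rules:          # no applicable rule: the machine halts
            break
        state, cells[head], move = rules[key]
        head += move
    return "".join(cells[i] for i in range(min(cells), max(cells) + 1)).strip(blank)

# Hypothetical rule table: scan right over a unary numeral, write one
# more stroke on the first blank cell, and halt -- computing n + 1.
rules = {
    ("start", "1"): ("start", "1", +1),
    ("start", "_"): ("done", "1", +1),
}
print(run_turing_machine(rules, "111"))  # -> 1111
```

The point of the sketch is that each step consults only the current state and the symbol under the head; no appeal to what the symbols mean is ever made.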

Note that CRTT is not the claim that any existing computer is, or has, a mind. Rather, it is the claim that having a mind consists of being a certain sort of computer—or, more plausibly, an elaborate assembly of many computers, each of which subserves a specific mental capacity (perception, memory, language processing, decision making, motor control, and so on). All of these computers are united in a complex “computational architecture” in which the output of one subsystem serves as the input to another. In his influential book The Modularity of Mind (1983), Fodor went so far as to postulate separate “modules” for perception and language processing that are “informationally encapsulated.” Although the outputs of perceptual modules serve as inputs to systems of belief fixation, the internal processes of each module are segregated from each other—explaining, for example, why visual illusions persist even for people who realize that they are illusions. Proponents of CRTT believe that eventually it will be possible to characterize the nature of various mental phenomena, such as perception and belief, in terms of this sort of architecture. Supposing that there are subsystems for perception, belief formation, and decision making, belief in general might be defined as “the output of the belief-formation system that serves as the input to the decision-making system” (beliefs are, after all, just those states on which a person rationally acts, given his desires).
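
A toy sketch may help fix the picture of such an architecture. In the following Python fragment, each subsystem is modeled as a function whose output feeds the next; the function names and their trivial internals are hypothetical placeholders, intended only to display the flow of information on which the proposed definition of belief relies.

```python
# A toy sketch of the "computational architecture" described above:
# separate subsystems wired so that the output of one serves as the
# input to the next. All names and internals here are hypothetical
# placeholders; only the flow of information is the point.

def perception_module(stimulus: str) -> str:
    # Informationally encapsulated: it sees only the stimulus, never
    # the agent's beliefs (hence illusions persist even when known).
    return f"percept({stimulus})"

def belief_fixation(percept: str, memory: list[str]) -> str:
    # Placeholder: a real system would integrate the percept with
    # stored background sentences; here it just wraps the percept.
    return f"belief({percept})"

def decision_making(belief: str, desires: list[str]) -> str:
    # Pairs the incoming belief with a desire to yield a plan.
    return f"plan({belief}; {desires[0]})"

# On the proposed definition, "belief" is just the state handed across
# this interface: output of belief fixation, input to decision making.
percept = perception_module("tall grass")
belief = belief_fixation(percept, memory=["Grass grows fast"])
print(decision_making(belief, desires=["lawn not overgrown"]))
```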

For example, a person’s memory that grass grows fast might be regarded as a state involving the existence of an electronic token of the sentence “Grass grows fast” in a certain location in the person’s brain. This sentence might be subject to computational processes of deductive, inductive, and abductive reasoning, yielding the sentence “My lawn will grow fast.” This sentence in turn might serve as input to the person’s decision-making system, where, one may suppose, there exists the desire that his lawn not be overgrown—i.e., a state involving a certain computational relation to an electronic token of the sentence “My lawn should not be overgrown.” Finally, this sentence and the previous one might be combined in standard patterns of decision theory to cause his body to move in such a way that he winds up dragging the lawn mower from the garage. (Of course, these same computational states may also cause any number of other nonrational effects—e.g., dreading, cursing, or experiencing a shot of adrenaline at the prospect of the labour involved.)
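
The example can be rendered as a toy program. In the sketch below, beliefs and desires are stored as sentence-like strings, one hard-wired inference rule stands in for the person's reasoning, and a lookup table stands in for decision theory; the mechanism is an illustrative assumption, though the sentences are those of the example above.

```python
# A toy rendering of the example: sentence-like tokens in belief and
# desire stores, one hard-wired inference rule in place of reasoning,
# and a lookup table in place of decision theory.

beliefs = {"Grass grows fast"}
desires = {"My lawn should not be overgrown"}

# One inference step: from a stored premise, token a new conclusion.
inference_rules = {
    "Grass grows fast": "My lawn will grow fast",
}

# Decision step: a (belief, desire) pair triggers a motor routine.
decision_table = {
    ("My lawn will grow fast", "My lawn should not be overgrown"):
        "drag the lawn mower from the garage",
}

for premise, conclusion in inference_rules.items():
    if premise in beliefs:
        beliefs.add(conclusion)

for belief in beliefs:
    for desire in desires:
        action = decision_table.get((belief, desire))
        if action:
            print(action)  # -> drag the lawn mower from the garage
```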

Although CRTT offers a promise of a theory of thought, it is important to appreciate just how far current research is from any actual fulfillment of that promise. In the 1960s the philosopher Hubert Dreyfus rightly ridiculed the naive optimism of early work in the area. Although it is not clear that he provided any in-principle argument against its eventual success, it is worth noting that the position of contemporary theorists is not much better than that of Descartes, who observed that, although it is possible for machines to emulate this or that specific bit of intelligent behaviour, no machine has yet displayed the “universal reason” exhibited in the common sense of normal human beings. People seem to be able to integrate information from arbitrary domains to reach plausible overall conclusions, as when juries draw upon diverse information to render a verdict about whether the prosecution has established its case “beyond a reasonable doubt.” Indeed, despite his own commitment to CRTT as a necessary feature of any adequate theory of the mind, even Fodor doubts that CRTT is by itself sufficient for such a theory.

Research strategies for intentionality

One of Turing’s achievements was to show how computations can be specified purely mechanically, in particular without any reference to the meanings of the symbols over which the computations are defined. Contrary to the assertions of some of CRTT’s critics, notably the American philosopher John Searle, specifying computations without reference to the meanings of symbols does not imply that the symbols do not have any meaning, any more than the fact that bachelors can be specified without mentioning their eating habits implies that bachelors do not eat. In fact, the symbols involved in computations typically have a very obvious meaning—referring, for example, to bank balances, interest rates, gamma globulin levels, or anything else that can be measured numerically. But, as already noted, the meaning or content of symbols used by ordinary computers is usually derived by stipulation from the intentional states of their programmers. In contrast, the symbols involved in human mental activity presumably have intrinsic meaning or intentionality. The real problem for CRTT, therefore, is how to explain the intrinsic meaning or intentionality of symbols in the brain.
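
The point is easily illustrated. In the sketch below, a purely formal operation on strings of strokes is defined without any mention of what the strokes mean, yet its result can be read, by external stipulation, as a bank balance or as a gamma globulin level; the function name and the two readings are illustrative assumptions echoing the examples in the text.

```python
# A purely formal operation, defined with no mention of what its
# symbols mean.

def formal_append(x: str, y: str) -> str:
    # Syntax only: concatenate two strings of strokes.
    return x + y

result = formal_append("|||", "||")
print(result)  # "|||||"

# The meanings are stipulated from outside the computation:
# reading 1: each stroke counts a dollar in a bank balance;
# reading 2: each stroke counts a unit of gamma globulin.
# The derivation itself is indifferent between the readings.
```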

This is really just an instance of the general problem already noted of filling the explanatory gap between the physical and the intentional—the problem of answering the challenge raised by Brentano’s thesis. No remotely adequate proposal has yet been made, but there are two serious research strategies that have been pursued in various ways by different philosophers. Inspired by the aforementioned “use” view of meaning urged by Wittgenstein, Ned Block and Christopher Peacocke have developed “internalist” theories according to which meaning is constituted by some features of a symbol’s causal (or conceptual) role within the brain, specifically the inferences in which it figures. For example, it might be constitutive of the meaning of the symbol “bachelor” that it be causally connected to a symbol whose meaning is “unmarried.” Other philosophers, such as Fred Dretske, Robert Stalnaker, and Fodor, have proposed “externalist” theories according to which the meaning of a symbol in the brain is constituted by various causal relations between the symbol and the phenomenon in the external world that it represents. For example, the symbol W might represent water by virtue of some causal, covariational relation it enjoys to actual water in the world: under suitable conditions, actual water causes an electronic token of W to appear in the brain. Alternatively, perhaps the entokening of W in the brain in the presence of actual water once provided a creature’s distant ancestors with some evolutionary advantage, as suggested in the work of Ruth Millikan and Karen Neander. There have been quite rich and subtle discussions of whether the thought contents of a system (a human being or an animal) must be specified “widely,” taking into account the environment the system inhabits, as in the work of Tyler Burge, or only “narrowly,” independently of any such environment, as in the work of Gabriel Segal.

Objections and responses