A number of objections of varying levels of sophistication have been made against CRTT.

Introspection

A once-common criticism was that people’s introspective experiences of their thinking are nothing like the computational processes that CRTT proposes are constitutive of human thought. However, like most modern psychological theories since at least the time of Freud, CRTT does not purport to be an account of how a person’s psychological life appears introspectively to him, and it is perfectly compatible with the sense that many people have that they think not in words but in images, maps, or various sorts of somatic feelings. CRTT is merely a claim about the underlying processes in the brain, the surface appearances of which can be as remote from the character of those processes as the appearance of an image on a screen can be from the inner workings of a computer.

Homunculi

Another frequent objection against theories like CRTT, originally voiced by Wittgenstein and Ryle, is that they merely reproduce the problems they are supposed to solve, since they invariably posit processes—such as following rules or comparing one thing with another—that seem to require the very kind of intelligence that the theory is supposed to explain. Another way of formulating the criticism is to say that computational theories seem committed to the existence in the mind of “homunculi,” or “little men,” to carry out the processes they postulate.

This objection might be a problem for a theory such as Freud’s, which posits entities such as the superego and processes such as the unconscious repression of desires. It is not a problem, however, for CRTT, because the central idea behind the development of the theory is Turing’s characterization of computation in terms of the purely mechanical steps of a Turing machine. These steps, such as moving left or right one cell at a time, are so simple and “stupid” that they can obviously be executed without the need of any intelligence at all.

Artifactuality and artificial intelligence (AI)

It is frequently said that people cannot be computers because whereas computers are “programmed” to do only what the programmer tells them to do, people can do whatever they like. However, this is decreasingly true of increasingly clever machines, which often come up with specific solutions to problems that certainly might not have occurred to their programmers (there is no reason why good chess programmers themselves need to be good chess players). Moreover, there is every reason to think that, at some level, human beings are indeed “programmed,” in the sense of being structured in specific ways by their physical constitutions. The American linguist Noam Chomsky, for example, has stressed the very specific ways in which the brains of human beings are innately structured to acquire, upon exposure to relevant data, only a small subset of all the logically possible languages with which the data are compatible.

Searle’s “Chinese room”

In a widely reprinted paper, “Minds, Brains, and Programs” (1980), Searle claimed that mental processes cannot possibly consist of the execution of computer programs of any sort, since it is always possible for a person to follow the instructions of the program without undergoing the target mental process. He offered the thought experiment of a man who is isolated in a room in which he produces Chinese sentences as “output” in response to Chinese sentences he receives as “input” by following the rules of a program for engaging in a Chinese conversation—e.g., by using a simple conversation manual. Such a person could arguably pass a Chinese-language Turing test for intelligence without having the remotest understanding of the Chinese sentences he is manipulating. Searle concluded that understanding Chinese cannot be a matter of performing computations on Chinese sentences, and mental processes in general cannot be reduced to computation.

Critics of Searle have claimed that his thought experiment suffers from a number of problems that make it a poor argument against CRTT. The chief difficulty, according to them, is that CRTT is not committed to the behaviourist Turing test for intelligence, so it need not ascribe intelligence to a device that merely presents output in response to input in the way that Searle describes. In particular, as a functionalist theory, CRTT can reasonably require that the device involve far more internal processing than a simple Chinese conversation manual would require. There would also have to be programs for Chinese grammar and for the systematic translation of Chinese words and sentences into the particular codes (or languages of thought) used in all of the operations of the machine that are essential to understanding Chinese—e.g., those involved in perception, memory, reasoning, and decision making. In order for Searle’s example to be a serious problem for CRTT, according to the theory’s proponents, the man in the room would have to be following programs for the full array of the processes that CRTT proposes to model. Moreover, the representations in the various subsystems would arguably have to stand in the kinds of relation to external phenomena proposed by the externalist theories of intentionality mentioned above. (Searle is right to worry about where meaning comes from but wrong to ignore the various proposals in the field.)

Defenders of CRTT argue that, once one begins to imagine all of this complexity, it is clear that CRTT is capable of distinguishing between the mental abilities of the system as a whole and the abilities of the man in the room. The man is functioning merely as the system’s “central processing unit”—the particular subsystem that determines what specific actions to perform when. Such a small part of the entire system does not need to have the language-understanding properties of the whole system, any more than Queen Victoria needs to have all of the properties of her realm.

Searle’s thought experiment is sometimes confused with a quite different problem that was raised earlier by Ned Block. This objection, which also (but only coincidentally) involves reference to China, applies not just to CRTT but to almost any functionalist theory of the mind.

Block’s “nation of China”

There are more than one billion people in China. Suppose, for the sake of the argument, that each of them can stand in for a neuron in the brain and that the functional relations that functionalists claim are constitutive of human mental life are ultimately definable in terms of firing patterns among assemblages of neurons. Now imagine that, perhaps as a celebration, it is arranged for each person in China to send signals for four hours to other people in China in precisely the same pattern in which the neurons in the brain of Chairman Mao Zedong fired (or might have fired) for four hours on his 60th birthday. During those four hours Mao was pleased but then had a headache. Would the entire nation of China during the new four-hour period be in the same mental states that Mao was in on his 60th birthday? Would the entire nation be truly describable as being pleased and then having a headache? Although most people would find this suggestion preposterous, the functionalist might be committed to it if it turns out that the functional relations that are constitutive of mental states are defined in terms of the firing patterns of neurons. Of course, it may turn out that other functional relations are essential as well. But the worry is that, because any functional relation at all can be emulated by the nation of China, no set of functional relations will be adequate to capture mentality.