Why can’t we do all this simultaneously? One reason could simply be that our resources for making and using plans have evolved only recently—that is, in only a few million years—and so we do not yet have multiple copies of them. In other words, we don’t yet have much capacity at our highest levels of ‘management’—for example, resources for keeping track of what’s left to be done and for finding ways to achieve those goals without causing too many internal conflicts. Also, our processes for doing such things are likely to use the kinds of symbolic descriptions discussed below—and those resources are limited too. If so, then our only option will be to focus on each of those goals sequentially.[56]
This sort of mutual exclusiveness could be a principal reason why we sometimes describe our thoughts as flowing in a ‘stream of consciousness’—or as taking the form of an ‘inner monologue’—a process in which a sequence of thoughts seems to resemble a story or narrative.[57] When our resources are limited, we may have no alternative to the rather slow ‘serial processing’ that is so frequently a prominent feature of what we call “high-level thinking.”[58]
Symbolic Descriptions: Why would we need to use symbols or words rather than, say, direct connections between cells in the brain?
Many researchers have developed schemes for learning from experience, by making and changing connections between various parts of systems called ‘neural networks’ or ‘connectionist learning machines.’[59] Such systems have proved able to learn to recognize various kinds of patterns—and it seems quite likely that such low-level processes could underlie most of the functions inside our brains.[60] However, although such systems are good at many useful kinds of jobs, they cannot fulfill the needs of more reflective tasks, because they store information in the form of numerical values that are hard for other resources to use. One can try to interpret these numbers as correlations or likelihoods, but they carry no other clues about what those links might otherwise signify. In other words, such representations don’t have much expressiveness. For example, a small such neural network might look like this.
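To make this concrete, here is a minimal sketch of such a network (the unit names and weight values are illustrative assumptions, not taken from the text). Notice that everything it stores is a bare number: the weights say how strongly units are linked, but nothing about what any link means.

```python
import math

# A tiny 'connectionist' network: all it stores is numeric weights.
# (Illustrative names and values, not from the text.)
weights = {
    ("unit_1", "unit_3"): 0.8,
    ("unit_2", "unit_3"): -0.4,
    ("unit_3", "unit_4"): 0.6,
}

def activate(inputs):
    """Propagate input activations through unit_3 to unit_4,
    squashing with a sigmoid at each step."""
    hidden = sum(inputs[src] * w
                 for (src, dst), w in weights.items() if dst == "unit_3")
    hidden = 1.0 / (1.0 + math.exp(-hidden))
    out = hidden * weights[("unit_3", "unit_4")]
    return 1.0 / (1.0 + math.exp(-out))

print(activate({"unit_1": 1.0, "unit_2": 1.0}))
```

Another process inspecting this structure could adjust the numbers or read off a correlation, but it could not ask *why* unit_1 excites unit_3—no such information exists in the representation.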
In contrast, the diagram below shows what we call a “Semantic Network” that represents some of the relationships between the parts of a three-block Arch. For example, each link that points to the concept supports could be used to predict that the top block would fall if we removed a block that supports it.
Thus, whereas a ‘connectionist network’ shows only the ‘strength’ of each of those relations, and says nothing about those relations themselves, the three-way links of Semantic Networks can be used for many kinds of reasoning.
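Such three-way links can be sketched as labeled triples. The following is a minimal illustration of the Arch example (the block and relation names here are assumptions for the sketch, not the book’s exact labels):

```python
# A tiny 'Semantic Network' for a three-block Arch: each fact is a
# labeled three-way link (subject, relation, object) that other
# processes can inspect and reason with. (Names are illustrative.)
triples = [
    ("block-A", "supports", "top-block"),
    ("block-B", "supports", "top-block"),
    ("top-block", "is-a", "block"),
]

def falls_if_removed(block):
    """Follow the 'supports' links to predict a consequence:
    whatever this block supports would fall if it were removed."""
    return {obj for subj, rel, obj in triples
            if subj == block and rel == "supports"}

print(falls_if_removed("block-A"))  # the top block depends on this support
```

Because the link is labeled *supports* rather than carrying a bare strength, a reasoning process can exploit its meaning—here, to predict that removing a supporting block endangers whatever rests on it.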
Self-Models: Why did you include ‘Self-Models’ among the processes in your first diagram?
When Joan was thinking about what she had done, she asked herself, “What would my friends have thought of me?” But the only way she could answer such questions would be to use some descriptions or models that represent her friends and herself. Some of Joan’s models of herself will be descriptions of her physical body, others will represent some of her goals, and yet others depict her dispositions in various social and physical contexts. Eventually we build additional structures that include collections of stories about our own pasts, ways to describe our mental states, bodies of knowledge about our capacities, and depictions of our acquaintances. Chapter §9 will further discuss how we make and use ‘models’ of ourselves.
Once Joan possesses a set of such models, she can use them to think self-reflectively—and she’ll feel that she’s thinking about herself. If those reflections lead to some choices she makes, then Joan may feel that she is in “control of herself”—and perhaps apply the term ‘conscious’ to this. As for her other processes, if she suspects that they exist at all, she may represent them as beyond her control and call them ‘unconscious’ or ‘unintentional.’ And once we provide machines with such structures, perhaps they, too, will learn to make statements like, “I feel sure that you know just what I mean when I speak about ‘mental experiences.’”
I don’t mean to insist that ‘detectors’ like these must be involved in all of the processes that we call consciousness. However, without some ways to recognize these particular patterns of mental conditions, we might not be able to talk about them!
This section began with some ideas about what we recognize when we talk about consciousness, and we suggested that this might relate to detecting some set of high-level activities.
However, we also ought to ask what might cause us to start up such sets of activities. This could work in the opposite direction: suppose that among Joan’s resources are some ‘Trouble-Detectors’ or ‘Critics’ that detect when her thinking has got into trouble—for example, when she fails to achieve some important goal, or to overcome some obstacle. In such a condition, Joan might describe her state in terms of distress or frustration, and try to remedy this by a mental act that, expressed in words, might be “Now I should make myself concentrate.” Then she could try to switch to some way to think that engages more high-level processes—for example, by activating a set of resources like these:
This suggests that we sometimes use ‘conscious’ to refer to activities that initiate rather than recognize sets of higher-level processes.
Student: How did you choose those particular features for your scheme to decide when to use words like ‘consciousness?’ Surely, since this is a suitcase-word, each person might make a different such list.
Indeed, just as we have multiple meanings for most of our other psychology-words, we’re likely to switch among different such feature-lists whenever we use words like ‘consciousness.’
4.3.1 The Immanence Illusion.
The paradox of consciousness—that the more consciousness one has, the more layers of processing divide one from the world—is, like so much else in nature, a trade-off. Progressive distancing from the external world is simply the price that is paid for knowing anything about the world at all. The deeper and broader [our] consciousness of the world becomes, the more complex the layers of processing necessary to obtain that consciousness.
When you enter a room you have the sense that you instantly see all the things in your view. However, this is an illusion because it will take time to recognize the objects that are actually there; then you’ll have to revise many wrong first impressions. Nevertheless, all this proceeds so quickly and smoothly that this requires an explanation—and we’ll propose one later in §8-3 Panalogy.
56
There are important exceptions to this. It would seem that experts like J.S. Bach developed ways to pursue multiple, yet still similar, goals in parallel. However, as their skills improve, most such experts become less and less able to tell the rest of us how they do it.
57
William James discussed this extensively. See: http://psychclassics.yorku.ca/James/jimmy11.htm. Several other more modern ideas about this are developed in Daniel Dennett’s 1991 book, Consciousness Explained.
58
So, despite a popular intuition, research on parallel processing has shown that such systems are frequently prone to end up accomplishing less for the same amount of computational power. Nevertheless, if that cost can be borne, then the final result may come sooner!
59
See