
§8-4. How does Human Learning work?

The word ‘learning’ is useful in everyday life—but when we look closely we see that it includes many ways that our brains can change themselves. To understand how our minds grow, we would need ideas about how people learn such different skills as how to build a tower or tie a shoe, or to understand what a new word means, or how a person can learn to guess what their friends are thinking about. If we tried to describe all the ways in which we learn, we’d find ourselves starting a very long list that includes such methods as these:

Learning by adding new If-Do-Then rules, or by changing low-level connections.

Learning by making new subgoals for goals, or finding better search techniques.

Learning by changing or adding descriptions, statements and narrative stories.

Learning by changing existing processes.

Learning to prevent mistakes by making Suppressors and Censors.

Learning to make better generalizations.

Learning new Selectors and Critics that provide us with new Ways to Think.

Learning new links among our fragments of knowledge.

Learning to make new kinds of analogies.

Learning to make new models, virtual worlds, and other types of representations.

As our children develop, they not only learn particular things, but they also acquire new thinking techniques. However, there is no evidence that, by itself, an infant could ever invent enough such things. So perhaps the most important human skill is to learn, not only from one’s own experience, but also from being told things by others. However, long ago, the philosopher David Hume raised a yet more fundamental question, namely, why learning is possible at all:

David Hume: “All inferences from experience suppose, as their foundation, that the future will resemble the past, and that similar powers will be conjoined with similar sensible qualities. If there be any suspicion that the course of nature may change, and that the past may be no rule for the future, all experience becomes useless, and can give rise to no inference or conclusion.”[159]

In other words, learning itself can only work in a suitably uniform universe. But still we need to ask how learning works. In particular, the following section will ask how a person can learn so much from seeing a single example.[160]

∞∞∞∞∞∞∞∞∞∞∞∞∞∞∞∞∞∞∞

How do we Learn so Rapidly?

No other creatures come close to us in being able to learn so much—and we do this with astonishing speed, as compared to other animals. Here’s an example that illustrates this:

Jack saw a dog do a certain trick, so he tried to teach it to his own pet, but Jack’s dog needed hundreds of lessons to learn it. Yet Jack learned that trick from seeing it only once. How did Jack so quickly learn so much—although he has seen only one instance of it?

People sometimes need long sessions of practice, but we need to explain those occasions in which we learn so much from a single experience. However, here is a theory which suggests that Jack does indeed need many repetitions—but he does them by using an ‘animal trainer’ inside his head, which he uses to train other resources inside his brain, in much the same way that he would teach his pet!

To do this, Jack could use a process like the Difference-Engine in §6-3. It begins with a description D of that trick, which is now in his ‘short-term memory.’ Then Jack’s ‘mental animal trainer’ can work to produce a copy C of this in some other, more permanent place—by repeatedly altering C until there remains no significant difference between it and D. Of course, if D has many intricate parts, this cycle will need many repetitions.[161]

A “Mental Animal Trainer”

So this ‘animal-trainer’ theory suggests that when a person learns something new, they do need multiple repetitions. However, we are rarely aware of this, perhaps because that process goes on in parts of the brain that our reflective thinking cannot perceive.
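
In computational terms, that ‘mental animal trainer’ can be pictured as a loop that keeps comparing the temporary description D with the growing permanent copy C, and removes one difference on each pass. The Python sketch below is only an illustration of that cycle, assuming a description can be modeled as a set of labeled features; the names find_difference and train_copy are invented for this example.

    # A minimal sketch of the 'mental animal trainer' as a difference-engine loop.
    # A description is modeled as a dict of feature -> value; this encoding, and
    # every name below, is an illustrative assumption, not Minsky's own notation.

    def find_difference(description, copy):
        """Return one (feature, wanted_value) pair on which the copy still differs."""
        for feature, wanted in description.items():
            if copy.get(feature) != wanted:
                return feature, wanted
        return None

    def train_copy(description, max_repetitions=1000):
        """Repeatedly alter a copy C until no difference from description D remains."""
        copy = {}                                  # the permanent copy starts out empty
        for repetition in range(1, max_repetitions + 1):
            difference = find_difference(description, copy)
            if difference is None:                 # nothing left to fix: training is done
                return copy, repetition - 1        # count only the corrective 'lessons'
            feature, wanted = difference
            copy[feature] = wanted                 # one repetition removes one difference
        return copy, max_repetitions

    # The more intricate parts D has, the more repetitions the loop needs.
    trick = {"sit": "on cue", "roll": "to the left", "bark": "twice"}
    learned, lessons = train_copy(trick)
    print(lessons, learned)                        # 3 lessons for a three-part trick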

Student: Why can’t we simply remember D by making that short-term memory more permanent—in the same place where it now is stored? Why should we have to copy it to some other, different place?

We are all familiar with the fact that our short-term memories are limited. For example, most persons can repeat a list of five or six items, but when there are ten or more items, then we reach for a writing pad. I suspect that this is because each of our fast-access ‘memory boxes’ is based on such a substantial amount of machinery that each brain includes only a few of them—and so, we cannot afford to use them up. This would answer the question our student asked: each time we made those connections more permanent we would lose a valuable short-term memory box!

This could account for the well-known fact that whatever we learn is first stored temporarily—and then it may take an hour or more to convert it into a more permanent form. For example, this would explain why a blow to the head can cause one to lose all memory of what happened before that accident. Indeed, that ‘transfer to long-term memory’ process sometimes can take a whole day or more, and often requires substantial intervals of sleep.[162]

What could be the difference between our short- and long-term memory systems? One answer appears to be that our short-term memory systems use certain short-lived chemicals, so that those memories will quickly fade unless those chemicals keep being refreshed; in contrast, we have good evidence that long-term memories depend on the production of longer-lived proteins that make more permanent connections between the cells of the brain.[163]
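
One crude way to picture that chemical difference is as two stores: a short-term store whose traces fade unless they are refreshed in time, and a long-term store into which the surviving traces are consolidated. The Python sketch below is only such a caricature; the decay constant, the refresh rule, and the name TwoStoreMemory are assumptions made for illustration.

    import time

    class TwoStoreMemory:
        """A caricature of the short-term / long-term distinction described above.
        Short-term traces fade unless refreshed within decay_seconds; consolidation
        copies the surviving traces into a permanent long-term store."""

        def __init__(self, decay_seconds=30.0):
            self.decay_seconds = decay_seconds
            self.short_term = {}       # item -> time of its last refresh
            self.long_term = set()     # consolidated, permanent records

        def notice(self, item):
            """Place an item in short-term memory, or refresh it if already there."""
            self.short_term[item] = time.monotonic()

        def consolidate(self):
            """Move still-fresh short-term traces into long-term memory; traces whose
            'chemicals' were never refreshed in time have simply faded away."""
            now = time.monotonic()
            for item, last_refresh in list(self.short_term.items()):
                if now - last_refresh < self.decay_seconds:
                    self.long_term.add(item)
                del self.short_term[item]   # either way, the scarce fast 'box' is freed

    memory = TwoStoreMemory(decay_seconds=60.0)
    memory.notice("the trick Jack saw")            # refreshed recently, so it survives
    memory.consolidate()
    print(memory.long_term)                        # {'the trick Jack saw'}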

It probably is no coincidence that modern computers evolved in a somewhat similar pattern: at every stage of development, fast-acting memory boxes were much more costly than slower ones. Consequently, computer designers invented ways to confine those expensive but fast-acting units into devices called ‘caches’ that only store data that is likely soon to be needed again. Today, a modern computer has several such caches that work at various different speeds, and the faster each is, the smaller it is. It may be the same inside our brains.
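
To make the analogy concrete, here is a toy two-level memory in Python: a small, fast cache in front of a larger, slower store, where the fast level evicts its least recently used entry when it fills. Only the size-versus-speed trade-off comes from the paragraph above; the capacity and the eviction policy are assumptions chosen for this example.

    from collections import OrderedDict

    class TwoLevelCache:
        """A toy two-level memory: a tiny, fast cache in front of a large, slow store.
        The capacity and the least-recently-used eviction policy are illustrative
        choices, not claims about how brains or any particular computer work."""

        def __init__(self, fast_capacity=4):
            self.fast_capacity = fast_capacity
            self.fast = OrderedDict()   # small, expensive, quick to search
            self.slow = {}              # large, cheap, slower to search

        def put(self, key, value):
            self.slow[key] = value
            self._promote(key, value)

        def get(self, key):
            if key in self.fast:                     # fast hit: the common case
                self.fast.move_to_end(key)
                return self.fast[key]
            value = self.slow.get(key)               # fall back to the slow store
            if value is not None:
                self._promote(key, value)            # likely to be needed again soon
            return value

        def _promote(self, key, value):
            self.fast[key] = value
            self.fast.move_to_end(key)
            if len(self.fast) > self.fast_capacity:  # evict the least recently used
                self.fast.popitem(last=False)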

Here are a few other reasons why our memory systems may have evolved to require so much time and processing.

Retrieval: When one makes a memory record, it would make no sense to store this away without providing some ways to retrieve it. This means that each record must also be made with links that will activate it when relevant (for example, by linking each new memory to some other existing panalogy); a sketch of such linking appears after this list.

Credit Assignment: A record of how one solved a problem would be unlikely to have much future use if it applied only to that particular problem. We’ll discuss this more in §8-5.

The ‘Real-Estate’ Problem for Long-term Memories: How could an ‘animal-trainer’ find room in the brain for the copy that it is trying to make? How could it find appropriate networks of brain cells to use, without disrupting connections and records that one would not want to erase? So, finding places for new memories may involve complex constraints and requirements, and this could be a reason why making permanent records takes so much time.
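
Returning to the ‘Retrieval’ item above: a record is of little use unless it is stored together with links that can re-activate it when it becomes relevant. The Python sketch below indexes each new record by a set of cue words; that cue scheme, and the names LinkedMemory, store, and recall, are illustrative assumptions that stand in for whatever links a brain might actually use.

    from collections import defaultdict

    class LinkedMemory:
        """Store each record together with retrieval links (cues) that can
        re-activate it later; the cue-word scheme is only an illustration."""

        def __init__(self):
            self.records = []
            self.links = defaultdict(set)   # cue -> indices of the records it activates

        def store(self, record, cues):
            """Record something new, and link it to the cues that should retrieve it."""
            index = len(self.records)
            self.records.append(record)
            for cue in cues:
                self.links[cue].add(index)

        def recall(self, cues):
            """Return every record linked to any of the given cues."""
            hits = set()
            for cue in cues:
                hits |= self.links[cue]
            return [self.records[i] for i in sorted(hits)]

    memory = LinkedMemory()
    memory.store("how the dog rolls over on cue", cues=["dog", "trick", "roll"])
    print(memory.recall(["trick"]))         # ['how the dog rolls over on cue']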


159

David Hume, ibid. Part II.


160

Hume was especially concerned with this question of how evidence can lead to conclusions: “It is only after a long course of uniform experiments in any kind, that we attain a firm reliance and security with regard to a particular event. Now where is that process of reasoning which, from one instance, draws a conclusion, so different from that which it infers from a hundred instances that are nowise different from that single one? I cannot find, I cannot imagine any such reasoning.”


161

Note that this is a difference-engine ‘in reverse,’ because it changes the internal description, instead of changing the actual situation. See “Verbal expression” in SoM §22.10.


162

Robert Stickgold et al., Cognitive Neuroscience, Vol. 12, No. 2, March 2000; also Nature Neuroscience, Vol. 3, No. 12, December 2000.


163

Another answer might be that some information is stored ‘dynamically’—for example by being repeatedly echoed between two or more different clusters of brain cells.