
The workaround by which we “reconstruct” memory for dates and times is but one example of the many clumsy techniques that humans use to cope with the lack of postal-code memory. If you Google for “memory tricks,” you’ll find dozens more.

Take, for example, the ancient “method of loci.” If you have a long list of words to remember, you can associate each one with a specific room in a familiar large building: the first word with the vestibule, the second word with the living room, the third word with the dining room, the fourth with the kitchen, and so forth. This trick, which is used in adapted form by all the world’s leading mnemonists, works pretty well, since each room provides a different context for memory retrieval — but it’s still little more than a Band-Aid, one more solution we shouldn’t need in the first place.
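To make the structure of the trick concrete, here is a minimal sketch in Python; the route of rooms and the word list are invented purely for illustration. Each item is bound to one familiar location at encoding time, and recall consists of mentally walking the route and asking what was “placed” in each room.

```python
# A toy model of the method of loci: bind each list item to a familiar
# location, then recall by "walking" the route in order.

rooms = ["vestibule", "living room", "dining room", "kitchen"]  # a fixed, familiar route
word_list = ["milk", "candles", "stamps", "basil"]              # arbitrary items to remember

# Encoding: pair each item with the next room along the route.
memory_palace = dict(zip(rooms, word_list))

# Retrieval: the route itself supplies the cues, one distinct context per item.
for room in rooms:
    print(f"In the {room}, I pictured: {memory_palace[room]}")
```

The code merely makes explicit what the trick relies on: one distinct retrieval context per item, supplied by a route you already know by heart.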

Another classical approach, so prominent in rap music, is to use rhyme and meter as an aid to memorization. Homer had his hexameter, Tom Lehrer had his song “The Elements” (“There’s antimony, arsenic, aluminum, selenium, / And hydrogen and oxygen and nitrogen and rhenium…”), and the band They Might Be Giants have their cover of “Why Does the Sun Shine? (The Sun Is a Mass of Incandescent Gas).”

Actors often take these mnemonic devices one step further. Not only do they remind themselves of their next lines by using cues of rhythm, syntax, and rhyme; they also focus on their character’s motivations and actions, as well as those of other characters. Ideally, this takes place automatically. In the words of the actor Michael Caine, the goal is to get immersed in the story, rather than worry about specific lines. “You must be able to stand there not thinking of that line. You take it off the other actor’s face.” Some performers can do this rather well; others struggle with it (or rely on cue cards). The point is, memorizing lines will never be as easy for us as it would be for a computer. We retrieve memorized information not by reading files from a specific sector of the hard drive but by cobbling together as many clues as possible — and hoping for the best.

Even the oldest standby — simple rehearsal, repeating something over and over — is a bit of clumsiness that shouldn’t be necessary. Rote memorization works reasonably well because it exploits the brain’s bias toward remembering things it encounters frequently, but here too the solution is hardly elegant. An ideal memory system would capture information in a single exposure, so we wouldn’t have to waste time with flash cards or lengthy memorization sessions. (Yes, I’ve heard the rumors about the existence of photographic memory, but no, I’ve never seen a well-documented case.)

There’s nothing wrong with mnemonics and no end to the possibilities; any cue can help. But when they fail, we can rely on a different sort of solution — arranging our life to accommodate the limits of our memory. I, for example, have learned through long experience that the only way to deal with my congenital absent-mindedness is to develop habits that reduce the demands on my memory. I always put my keys in the same place, position anything I need to bring to work by the front door, and so forth. To a forgetful guy like me, a PalmPilot is a godsend. But the fact that we can patch together solutions doesn’t mean that our mental mechanisms are well engineered; it is a symptom of the opposite condition. It is only the clumsiness of human memory that necessitates these tricks in the first place.

Given the liabilities of our contextual memory, it’s natural to ask whether its benefits (speed, for example) outweigh the costs. I think not, and not just because the costs are so high, but because it is possible in principle to have the benefits without the costs. The proof is Google (not to mention a dozen other search engines). Search engines start with an underlying substrate of postal-code memory (the well-mapped information they can tap into) and build contextual memory on top. The postal-code foundation guarantees reliability, while the context on top hints at which memories are most likely needed at a given moment. If evolution had started with a system of memory organized by location, I bet that’s exactly what we’d have, and the advantages would be considerable. But our ancestors never made it to that part of the cognitive mountain; once evolution stumbled upon contextual memory, it never wandered far enough away to find another considerably higher peak. As a result, when we need precise, reliable memories, all we can do is fake it — kluging a poor man’s approximation of postal-code memory onto a substrate that doesn’t genuinely provide for it.
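The contrast the paragraph draws can be sketched in a few lines of Python. This is a toy under stated assumptions, not a description of how Google actually works: a dictionary stands in for postal-code memory (every record sits at a fixed, reliable address), and a small scoring function layered on top plays the role of contextual memory, ranking records by cue overlap, frequency of use, and recency.

```python
# Toy illustration: reliable location-addressed storage underneath,
# context-driven ranking layered on top. All records and numbers are invented.

documents = {
    "doc1": {"text": "where I left my keys",   "times_used": 9, "days_since_use": 3},
    "doc2": {"text": "recipe for lentil soup", "times_used": 2, "days_since_use": 1},
    "doc3": {"text": "keys to the office",     "times_used": 5, "days_since_use": 8},
}

def fetch(address):
    """Postal-code memory: given an exact address, retrieval never misses."""
    return documents[address]

def recall(cue):
    """Contextual memory: rank every record by cue overlap, frequency, and recency."""
    def score(record):
        overlap = len(set(cue.split()) & set(record["text"].split()))
        return overlap * 10 + record["times_used"] - record["days_since_use"]
    return sorted(documents.values(), key=score, reverse=True)

print(fetch("doc1")["text"])      # exact, reliable lookup by address
print(recall("keys")[0]["text"])  # best guess given the current cue
```

The dictionary underneath guarantees that nothing is ever misfiled; the ranking on top only guesses at relevance. Human memory, on the argument above, has the second layer without the first.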

In the final analysis, we would be nowhere without memory; as Steven Pinker once wrote, “To a very great extent, our memories are ourselves.” Yet memory is arguably the mind’s original sin. So much is built on it, and yet it is, especially in comparison to computer memory, wildly unreliable.

In no small part this is because we evolved not as computers but as actors, in the original sense of the word: as organisms that act, entities that perceive the world and behave in response to it. And that led to a memory system attuned to speed more than reliability. In many circumstances, especially those requiring snap decisions, recency, frequency, and context are powerful tools for mediating memory. For our ancestors, who lived almost entirely in the here and now (as virtually all nonhuman life forms still do), quick access to contextually relevant memories of recent events or frequently occurring ones helped them navigate the challenges of seeking food or avoiding danger. Likewise, for a rat or a monkey, it is often enough to remember related general information. Concerns about misattribution or bias in courtroom testimony simply don’t apply.

But today, courts, employers, and many other facets of everyday life make demands that our pre-hominid predecessors rarely faced, requiring us to remember specific details, such as where we last put our keys (rather than where we tend, in general, to put them), where we’ve gotten particular information, and who told us what, and when.

To be sure, there will always be those who see our limits as virtues. The memory expert Henry Roediger, for example, has implied that memory errors are the price we pay in order to make inferences. The Harvard psychologist Dan Schacter, meanwhile, has argued that the fractured nature of memory prepares us for the future: “A memory that works by piecing together bits of the past may be better suited to simulating future events than one that is a store of perfect records.” Another common suggestion is that we’re better off because we can’t remember certain things, as if faulty memory would spare us from pain.

These ideas sound nice on the surface, but I don’t see any evidence to support them. The notion that the routine failures of human memory convey some sort of benefit misses an important point: the things that we have trouble remembering aren’t the things we’d like to forget. It’s easy to glibly imagine some kind of optimal state wherein we’d remember only happy thoughts, a bit like Dorothy at the end of The Wizard of Oz. But the truth is that we generally can’t — contrary to Freud — repress memories that we find painful, and we don’t automatically forget them either. What we remember isn’t a function of what we want to remember, and what we forget isn’t a matter of what we want to forget; any war veteran or Holocaust survivor could tell you that. What we remember and what we forget are a function of context, frequency, and recency, not a means of attaining inner peace. It’s possible to imagine a robot that could automatically expunge all unpleasant memories, but we humans are just not built that way.