The halo effect (and its devilish opposite) is really just a special case of a more general phenomenon: just about anything that hangs around in our mind, even a stray word or two, can influence how we perceive the world and what we believe. Take, for example, what happens if I ask you to memorize this list of words: furniture, self-confident, corner, adventuresome, chair, table, independent, and television. (Got that? What follows is more fun if you really do try to memorize the list.)
Now read the following sketch, about a man named Donald:
Donald spent a great amount of his time in search of what he liked to call excitement. He had already climbed Mt. McKinley, shot the Colorado rapids in a kayak, driven in a demolition derby, and piloted a jet-powered boat — without knowing very much about boats. He had risked injury, and even death, a number of times. Now he was in search of new excitement. He was thinking, perhaps, he would do some skydiving or maybe cross the Atlantic in a sailboat.
To test your comprehension, I ask you to sum up Donald in a single word. And the word that pops into your mind is… (see the footnote).[14] Had you memorized a slightly different list, say, furniture, conceited, corner, reckless, chair, table, aloof, and television, the first word that would have come to mind would likely be different — not adventuresome, but reckless. Donald may perfectly well be both reckless and adventuresome, but the connotations of the two words are very different — and people tend to pick a characterization that relates to what was already on their mind (in this case, slyly implanted by the memory list). Which is to say that your impression of Donald is swayed by a bit of information (the words in the memory list) that ought to be entirely irrelevant.
Another phenomenon, called the “focusing illusion,” shows how easy it is to manipulate people simply by directing their attention to one bit of information or another. In one simple but telling study, college students were asked to answer two questions: “How happy are you with your life in general?” and “How many dates did you have last month?” One group heard the questions in exactly that order, while another heard them in the opposite order, second question first. In the group that heard the happiness question first, there was almost no correlation between people’s answers to the two questions; some people who had few dates reported that they were happy, some people with many dates reported that they were sad, and so forth. Flipping the order of the questions, however, put people’s focus squarely on romance; suddenly, they could not see their happiness as independent of their love life. People with lots of dates saw themselves as happy, people with few dates viewed themselves as sad. Period. People’s judgments in the dates-first condition (but not in the happiness-first condition) were strongly correlated with the number of dates they’d had. This may not surprise you, but it ought to, because it highlights just how malleable our beliefs really are. Even our own internal sense of self can be influenced by what we happen to focus on at a given moment.
The bottom line is that every belief passes through the unpredictable filter of contextual memory. Either we directly recall a belief that we formed earlier, or we calculate what we believe based on whatever memories we happen to bring to mind.
Yet few people realize the extent to which beliefs can be contaminated by vagaries of memory. Take the students who heard the dating question first. They presumably thought that they were answering the happiness question as objectively as they could; only an exceptionally self-aware undergraduate would realize that the answer to the second question might be biased by the answer to the first. Which is precisely what makes mental contamination so insidious. Our subjective impression that we are being objective rarely matches the objective reality: no matter how hard we try to be objective, human beliefs, because they are mediated by memory, are inevitably swayed by minutiae that we are only dimly aware of.
From an engineering standpoint, humans would presumably be far better off if evolution had supplemented our contextually driven memory with a way of systematically searching our inventory of memories. Just as a pollster’s data are most accurate if taken from a representative cross section of a population, a human’s beliefs would be soundest if they were based on a balanced set of evidence. But alas, evolution never discovered the statistician’s notion of an unbiased sample.
Instead, we routinely treat whatever memories are most recent or most easily remembered as far more important than any other data. Consider, for example, an experience I had recently, driving cross-country and wondering when I’d arrive at the next motel. When traffic was moving well, I’d think to myself, “Wow, I’m driving at 80 miles per hour on the interstate; I’ll be there in an hour.” When traffic slowed due to construction, I’d say, “Oh no, it’ll take me two hours.” What I was almost comically unable to do was to take an average across the two data points and say, “Sometimes the traffic moves well, sometimes it moves poorly. I anticipate a mixture of good and bad, so I bet it will take an hour and a half.”
Some of the most mundane yet common interpersonal friction flows directly from the same failure to reflect on how well our samples represent reality. When we squabble with our spouse or our roommate about whose turn it is to wash the dishes, we are probably (without realizing it) better able to remember the previous times when we ourselves took care of them than the times when our roommate or spouse did; after all, our memory is organized to focus primarily on our own experience. And we rarely compensate for that imbalance — so we come to believe we’ve done more work overall and perhaps end up in a self-righteous huff. Studies show that in virtually any collaborative enterprise, from taking care of a household to writing academic papers with colleagues, the sum of each individual’s perceived contribution exceeds the total amount of work done. We cannot remember what other people did as well as we recall what we did ourselves — which leaves everybody (even shirkers!) feeling that others have taken advantage of them. Realizing the limits of our own data sampling might make us all a lot more generous.
Mental contamination is so potent that even entirely irrelevant information can lead us by the nose. In one pioneering experiment, the psychologists Amos Tversky and Daniel Kahneman spun a wheel of fortune, marked with the numbers 1 to 100, and then asked their subjects a question that had nothing to do with the outcome of spinning the wheel: what percentage of African countries are in the United Nations? Most participants didn’t know for sure, so they had to estimate — fair enough. But their estimates were considerably affected by the number on the wheel. When the wheel registered 10, a typical response to the UN question was 25 percent, whereas when the wheel came up at 65, a typical response was 45 percent.[15]
This phenomenon, which has come to be known as “anchoring and adjustment,” occurs again and again. Try this one: Add 400 to the last three digits of your cell phone number. When you’re done, answer the following question: in what year did Attila the Hun’s rampage through Europe finally come to an end? The average guess of people whose phone number digits plus 400 yielded a sum less than 600 was A.D. 629, whereas the average guess of people whose sum came to between 1,200 and 1,399 was A.D. 979, 350 years later.[16]
[15] Nobody’s ever been able to tell me whether the original question was meant to ask how many of the countries in Africa were in the UN, or how many of the countries in the UN were in Africa. But in a way, it doesn’t matter: anchoring is strong enough to apply even when we don’t know precisely what the question is.
[16] When did Attila actually get routed? A.D. 451. If you’re aware of the process of anchoring and adjustment, you can see why, during a financial negotiation, it’s generally better to make the opening bid than to respond to it. This phenomenon also explains why, as one recent study showed, supermarkets can sell more cans of soup with signs that say LIMIT 12 PER CUSTOMER rather than LIMIT 4 PER CUSTOMER.