Once we decide something is true (for whatever reason), we often make up new reasons for believing it. Consider, for example, a study that I ran some years ago. Half my subjects read a report of a study showing that good firefighting was correlated with high scores on a measure of risk-taking ability; the other half read the opposite: they were told of a study showing that good firefighting was negatively correlated with risk-taking ability, that is, that risk takers made poor firefighters. Each group was then further subdivided. Some people were asked to reflect on what they read, writing down reasons why the study they read about might have gotten the results it did; others were simply kept busy with a series of difficult geometrical puzzles like those found on an IQ test.

Then, as social psychologists so often do, I pulled the rug out from under my subjects: “Headline, this news just in — the study you read about in the first part of the experiment was a fraud. The scientists who allegedly studied firefighting actually made their data up! What I’d like to know is what you really think — is firefighting really correlated with risk taking?”

Even after I told my subjects that the original study was complete rubbish, those who had been given a chance to reflect (and create their own explanations) continued to believe whatever they had initially read. In short, if you give someone half a chance to make up their own reasons to believe something, they’ll take you up on the opportunity and start to believe it — even if their original evidence is thoroughly discredited. Rational man, if he (or she) existed, would only believe what is true, invariably moving from true premises to true conclusions. Irrational man, kluged product of evolution that he (or she) is, frequently moves in the opposite direction, starting with a conclusion and seeking reasons to believe it.

Belief, I would suggest, is stitched together out of three fundamental components: a capacity for memory (beliefs would be of no value if they came and went without any long-term hold on the mind), a capacity for inference (deriving new facts from old, as just discussed), and a capacity for, of all things, perception.

Superficially, one might think of perception and belief as separate. Perception is what we see, hear, taste, smell, or feel, while belief is what we know or think we know. But in terms of evolutionary history, the two are not as different as they initially appear. The surest path to belief is to see something. When my wife’s golden retriever, Ari, wags his tail, I believe him to be happy; mail falls through the slot, and I believe the mail has arrived. Or, as Chico Marx put it, “Who are you gonna believe, me or your own eyes?”

The trouble kicks in when we start to believe things that we don’t directly observe. And in the modern world, much of what we believe is not directly or readily observable. Our capacity to acquire new beliefs vicariously — from friends, teachers, or the media, without direct experience — is a key to what allows humans to build cultures and technologies of fabulous complexity. My canine friend Ari learns whatever he learns primarily through trial and error; I learn what I learn mainly through books, magazines, and the Internet. I may bring some skepticism to what I read. (Did journalist-investigator Seymour Hersh really have a well-placed, anonymous source? Did movie reviewer Anthony Lane really even see Clerks II?) But largely, for better or worse, I tend to believe what I read, and I learn much of what I know through that medium. Ari (also for better or worse) knows only what he sees, hears, feels, tastes, or smells.

In the early 1990s, the psychologist Daniel Gilbert, now well known for his work on happiness, tested a theory that he traced back to the seventeenth-century philosopher Baruch de Spinoza. Spinoza’s idea was that “all information is [initially] accepted during comprehension and… false information… unaccepted [only later].” As a test of Spinoza’s hypothesis, Gilbert presented subjects with true and false propositions — sometimes interrupting them with a brief, distracting tone (which required them to press a button). Just as Spinoza might have predicted, interruptions increased the chance that subjects would believe the false propositions;[20] other studies showed that people are more likely to accept falsehoods if they are distracted or put under time pressure. The ideas we encounter are, other things being equal, automatically believed — unless and until there is a chance to properly evaluate them.

This difference in order (between hearing, accepting, and evaluating versus hearing, evaluating, and then accepting) might initially seem trivial, but it has serious consequences. Take, for example, a case that was recently described on Ira Glass’s weekly radio show This American Life. A lifelong political activist who was the leading candidate for chair of New Hampshire’s Democratic Party was accused of possessing substantial amounts of child pornography. Even though his accuser, a Republican state representative, offered no proof, the accused was forced to step down, his political career essentially ruined. A two-month investigation ultimately found no evidence, but the damage was done — our legal system may be designed around the principle of “innocent until proven guilty,” but our mind is not.

Indeed, as every good lawyer knows intuitively, just asking about some possibility can increase the chance that someone will believe it. (“Isn’t it true you’ve been reading pornographic magazines since you were twelve?” “Objection — irrelevant!”) Experimental evidence bears this out: merely hearing something in the form of a question — rather than a declarative statement — is often enough to induce belief.

Why do we humans so often accept uncritically what we hear? Because of the way in which belief evolved: from machinery first used in the service of perception. And in perception, a high percentage of what we see is true (or at least it was before the era of television and Photoshop). When we see something, it’s usually safe to believe it. The cycle of belief works in the same way — we gather some bit of information, directly through our senses or, perhaps more often, indirectly through language and communication. Either way, we tend to believe it immediately and only later, if at all, consider its veracity.

[20] The converse wasn’t true: interrupting people’s consideration of true propositions didn’t lead to increased disbelief, precisely because people initially accept that what they hear is true, whether or not they ultimately get a chance to properly evaluate it.