The bad news is that such exquisite rationality may well be the exception rather than the rule. People are as good as they are at the pointing-at-circles task because it draws on a mental capacity — the ability to reach for things — that is truly ancient. Reaching is close to a reflex, not just for humans, but for every animal that grabs a meal to bring it closer to its mouth; by the time we are adults, our reaching system is so well tuned that we never even think about it. Yet in a strict technical sense, every time I reach for my cup of tea, I make a set of choices. I decide that I want the tea, that the potential pleasure and the hydration offered by the beverage outweigh the risk of spillage. More than that, and even less consciously, I decide at what angle to send my hand. Should I use my left hand (which is closer) or my right hand (which is better coordinated)? Should I grab the cylindrical central portion of the mug (which holds the contents that I really want) or go instead for the handle, a less direct but easier-to-grasp means to the tea that is inside? My hands and muscles align themselves automatically, my fingers forming a pincer grip, my elbow rotating so that my hand is in perfect position. Reaching, central to life, involves many decisions, and evolution has had a long time to get them just right.
But economics is not supposed to be a theory of how people reach for coffee mugs; it’s supposed to be a theory of how they spend their money, allocate their time, plan for their retirement, and so forth — it’s supposed to be, at least in part, a theory about how people make conscious decisions.
And often, the closer we get to conscious decision making, a more recent product of evolution, the worse our decisions become. When the NYU professors reworked their grasping task to make it a more explicit word problem, most subjects’ performance fell to pieces. Our more recently evolved deliberative system is, in this particular respect, no match for our ancient system for muscle control. Outside that rarefied domain, there are loads of circumstances in which human performance predictably defies any reasonable notion of rationality.
Suppose, for example, that I give you a choice between participating in two lotteries. In one lottery, you have an 89 percent chance of winning $1 million, a 10 percent chance of winning $5 million, and a 1 percent chance of winning nothing; in the other, you have a 100 percent chance of winning $1 million. Which do you go for? Almost everyone takes the sure thing.
Now suppose instead your choice is slightly more complicated. You can take either an 11 percent chance at $1 million or a 10 percent chance of winning $5 million. Which do you choose? Here, almost everyone goes for the second choice, a 10 percent shot at $5 million.
What would be the rational thing to do? According to the theory of rational choice, you should calculate your “expected utility,” or expected gain, essentially averaging the amount you would win across all the possible outcomes, weighted by their probability. An 11 percent chance at $1 million works out to an expected gain of $110,000; 10 percent at $5 million works out to an expected gain of $500,000, clearly the better choice. So far, so good. But when you apply the same logic to the first set of choices, you discover that people behave far less rationally. The expected gain in the lottery that is split 89 percent/10 percent/1 percent is $1,390,000 (89 percent of $1 million plus 10 percent of $5 million plus 1 percent of $0), compared to a mere million for the sure thing. Yet nearly everyone goes for the million bucks — leaving, in expected terms, nearly $400,000 on the table. Pure insanity from the perspective of “rational choice.”
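If you want to check those numbers for yourself, here is a minimal sketch of the arithmetic in Python (purely illustrative; the probabilities and payoffs are simply the ones given above):

```python
# Illustrative sketch only: the expected-gain arithmetic from the two lottery problems.

def expected_value(outcomes):
    """Average payoff, weighted by probability, over all possible outcomes."""
    return sum(payoff * probability for payoff, probability in outcomes)

# First problem: the three-way lottery versus the sure thing.
risky_lottery = [(1_000_000, 0.89), (5_000_000, 0.10), (0, 0.01)]
sure_thing    = [(1_000_000, 1.00)]

# Second problem: 11 percent at $1 million versus 10 percent at $5 million.
option_a = [(1_000_000, 0.11), (0, 0.89)]
option_b = [(5_000_000, 0.10), (0, 0.90)]

print(expected_value(risky_lottery))  # 1,390,000
print(expected_value(sure_thing))     # 1,000,000
print(expected_value(option_a))       #   110,000
print(expected_value(option_b))       #   500,000
```

The calculation is nothing more than a weighted average: each payoff multiplied by its probability, then summed. By that yardstick, the three-way lottery beats the sure thing by $390,000, yet the sure thing is what nearly everyone picks.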
Another experiment offered undergraduates a choice between two raffle tickets, one with 1 chance in 100 to win a $500 voucher toward a trip to Paris, the other, 1 chance in 100 to win a $500 voucher toward college tuition. Most people, in this case, prefer Paris. No big problem there; if Paris is more appealing than the bursar’s office, so be it. But when the odds increase from 1 in 100 to 99 out of 100, most people’s preferences reverse; given the near certainty of winning, most students suddenly go for the tuition voucher rather than the trip — sheer lunacy, if they’d really rather go to Paris.
To take an entirely different sort of illustration, consider the simple question I posed in the opening chapter: would you drive across town to save $25 on a $100 microwave? Most people would say yes, but hardly anybody would drive across town to save the same $25 on a $1,000 television. From the perspective of an economist, this sort of thinking too is irrational. Whether the drive is worth it should depend on just two things: the value of your time and the cost of the gas, nothing else. Either the value of your time and gas is less than $25, in which case you should make the drive, or your time and gas are worth more than $25, in which case you shouldn’t make the drive — end of story. Since the labor to drive across town is the same in both cases and the monetary amount is the same, there’s no rational reason why the drive would make sense in one instance and not the other.
On the other hand, to anyone who hasn’t taken a class in economics, saving $25 on $100 seems like a good deal (“I saved 25 percent!”), whereas saving $25 on $1,000 appears to be a stupid waste of time (“You drove all the way across town to get 2.5 percent off? You must have nothing better to do”). In the clear-eyed arithmetic of the economist, a dollar is a dollar is a dollar, but most ordinary people can’t help but think about money in a somewhat less rational way: not in absolute terms, but in relative terms.
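The contrast between the two ways of thinking can be made concrete. The sketch below uses a made-up $10 figure for the value of the time and gas and an arbitrary 10 percent “feels worth it” threshold; neither number comes from the text, and both are just stand-ins to show the shape of the two rules:

```python
# Illustrative sketch: absolute-dollar reasoning versus relative (percentage) reasoning.
# The $10 trip cost and the 10 percent threshold are assumptions made up for this example.

def economist_says_drive(savings, trip_cost):
    # Rational-choice rule: drive whenever the dollars saved exceed
    # the value of the time and gas, regardless of the item's price.
    return savings > trip_cost

def intuition_says_drive(savings, item_price, feels_worth_it=0.10):
    # Relative-terms rule: drive only when the discount looks big enough
    # as a fraction of the price.
    return savings / item_price > feels_worth_it

TRIP_COST = 10  # assumed value of the time and gas, in dollars

for price in (100, 1_000):
    print(price,
          economist_says_drive(25, TRIP_COST),  # True for both the microwave and the TV
          intuition_says_drive(25, price))      # True for the $100 microwave, False for the $1,000 TV
```

The first rule cares only about the $25 and the cost of the trip, so it gives the same answer for the microwave and the television; the second rule divides by the price tag, which is exactly where the inconsistency creeps in.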
What leads us to think about money in (less rational) relative terms rather than (more rational) absolute terms?
To start with, humans didn’t evolve to think about numbers, much less money, at all. Neither money nor numerical systems are universal across human cultures. Some cultures trade only by means of barter, and some have simple counting systems with only a few numerical terms, such as “one,” “two,” and “many.” Clearly, both counting systems and money are cultural inventions. On the other hand, all vertebrate animals are built with what some psychologists call an “approximate system” for numbers, which lets them distinguish more from less. And that system in turn has the peculiar property of being “nonlinear”: the difference between 1 and 2 subjectively seems greater than the difference between 101 and 102. Much of the brain is built on this principle, known as Weber’s law. Thus, a 150-watt light bulb seems only a bit brighter than a 100-watt bulb, whereas a 100-watt bulb seems much brighter than a 50-watt bulb.
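Weber’s law is often summarized by saying that perceived magnitude grows roughly logarithmically rather than linearly. Under that simplified assumption (a rough approximation, not a claim from the text), a quick sketch shows why 1 versus 2 feels like a chasm while 101 versus 102 barely registers, and why the 50-to-100-watt jump dwarfs the 100-to-150-watt one:

```python
# Rough sketch, assuming a logarithmic approximation to Weber's law:
# the "felt" difference between two magnitudes depends on their ratio, not their gap.
import math

def felt_difference(a, b):
    return abs(math.log(b) - math.log(a))

print(felt_difference(1, 2))      # ~0.69  -> a big subjective jump
print(felt_difference(101, 102))  # ~0.01  -> barely noticeable
print(felt_difference(50, 100))   # ~0.69  -> 50W vs. 100W: much brighter
print(felt_difference(100, 150))  # ~0.41  -> 100W vs. 150W: only a bit brighter
```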
In some domains, following Weber’s law makes a certain amount of sense: an extra 2 kilos of wheat in the storehouse, relative to a baseline of 100 kilos, isn’t going to matter much if everything beyond the first few kilos ultimately spoils; what really matters is the difference between starving and not starving. Of course, money doesn’t rot (except in times of hyperinflation), but our brain didn’t evolve to cope with money; it evolved to cope with food.