People’s responses to this kind of thought experiment have led Jonathan Haidt to a new view of moral judgment, one that prioritizes the moral gut. Haidt argues that our moral judgments of right and wrong, virtue, harm, and fairness are the products of two kinds of processes. The first may seem fairly intuitive to you—it has occupied the thinking of those who have theorized about moral judgment for 2,000 years—and that is complex, deliberative reason. When we judge whether an action is right or wrong, we engage in many complex reasoning processes: we consider society-wide consequences, cost-benefit analyses, motives and intentions, and abstract principles like rights, freedoms, and duties. Psychological science has privileged these higher-order reasoning processes in accounts of moral judgment. This is nowhere better typified than by the well-known theory of moral development of Harvard psychologist Lawrence Kohlberg. Beginning with his dissertation, Kohlberg argued that the highest forms of moral judgment require abstract considerations of rights, equality, and harm—achieved, in his research, by only 2 to 3 percent of the individuals he studied around the world (most typically highly educated, upper-class males like himself!).
The second, more democratic element of moral judgment, almost completely ignored in psychological science, is the gut. Emotions provide rapid intuitions about fairness, harm, virtue, kindness, and purity. When you first reacted to the sex-with-chicken example, part of your response was most likely a rapid, ancient feeling of revulsion and disgust at the image of such a species-mixing, impure sexual practice. In one study, my first mentor, Phoebe Ellsworth, and I had individuals move their facial muscles, much as Ekman and colleagues did with the DFA, into the facial expression of anger or sadness. As participants held the expression, they made quick judgments about who was to blame for problems they might experience in the future in their romantic, work, and financial lives—other people or impersonal, situational factors. Those participants who made these judgments with an angry expression on their face blamed other people for the injustices. Those with faces configured into a sad expression attributed the same problems to fate and impersonal factors. Our moral judgments of blame are guided by sensations arising in the viscera and facial musculature.
Haidt reasons that thousands of generations of human social evolution have honed moral intuitions in the form of embodied emotions like compassion, gratitude, embarrassment, and awe. Emotions are powerful moral guides. They are upheavals that propel us to protect the foundations of moral communities—concerns over fairness, obligations, virtue, kindness, and reciprocity. Our capacity for virtue and concern over right and wrong are wired into our bodies.
If you are not convinced, consider the following neuroimaging study by Joshua Greene and colleagues, which suggests that the emotional and reasoning elements of moral judgment activate different regions of the brain. Participants judged different moral and nonmoral dilemmas in terms of whether they considered the action to be appropriate or not. Some moral dilemmas were impersonal and relatively unemotional. For example, in the “trolley dilemma” the participant imagines a runaway trolley headed for five people who will be killed if it proceeds on its course. The only way to save them is to hit a switch that will turn the trolley onto an alternate set of tracks, where it will kill one person instead of five. When asked to indicate whether it’s appropriate or not to hit that switch and save five lives, participants answer yes with little hesitation.
As an illustration of the emotionally evocative scenarios, consider the “footbridge dilemma.” Again five people’s lives are threatened by a runaway trolley. In this case the participant imagines standing next to a very heavy stranger on a footbridge over the trolley tracks. If the participant pushes the rotund stranger off the bridge with his own hands and onto the tracks, the stranger dies, but the train veers off its course, thus saving five lives (the participant’s own weight, it is explained, is insufficient to send the trolley off the track). Is it appropriate to push the stranger off the footbridge?
While participants responded to several dilemmas of this sort, functional magnetic resonance imaging techniques ascertained which parts of the participant’s brain were active. The personal moral dilemmas activated regions of the brain that previous research had found to be involved in emotion. The impersonal moral dilemmas and the nonmoral dilemmas activated brain regions associated with working memory, regions centrally involved in more deliberative reasoning.
When the Dalai Lama visited the gas chambers of Auschwitz, and reflected, stunned, upon this most horrific of human atrocities, he offered the following: “Events such as those which occurred at Auschwitz are violent reminders of what can happen when individuals—and by extension, whole societies—lose touch with basic human feeling.” His claim is that the direction of human culture—toward cooperation or genocide—rests upon being guided by basic moral feelings. Confucius was on the same page: “the ability to take one’s own feelings as a guide—that is the sort of thing that lies in the direction of jen.” Martha Nussbaum, bucking the trends of moral philosophy, concurs by arguing that emotions at their core contain judgments of value, about fairness, harm, rights, purity, reciprocity—all of the core ideas of moral and ethical living. Emotions are guides to moral reasoning, to ethical action in the fast, face-to-face exchanges of our social life. Reason and passion are collaborators in the meaningful life.
ENEMIES NO MORE
We often resort to thought experiments to discern the place of emotion in social life. Natural-state thought experiments plumb our intuition about what humans were like prior to culture, civilization, or guns, germs, and steel. Ideal-mind thought experiments—used in meditation and in philosophical exercises like the moral philosopher John Rawls’s veil of ignorance—ask us to envision the mind operating in ideal conditions, independent of the press of our own desires or the web of social relations we find ourselves in.
Emotions have not fared well in these thought experiments. Philosophers have most consistently argued that emotions should be extirpated from social life. This train of thought finds its clearest expression in the third century BC, with the Epicureans and Stoics; it extends to St. Augustine, St. Paul, and the Puritans, and on to many contemporary accounts of ethical living (for example, Ayn Rand). In the words of the influential American psychologist B. F. Skinner: “We all know that emotions are useless and bad for our peace of mind.”
If this brief philosophical history seems a bit arcane, consider the metaphors that we routinely use in the English language to explain our emotions to others (see table below), revealed by the linguists Zoltán Kövecses and George Lakoff. We conceive of emotions as opponents (and not allies). Emotions are illnesses (and not sources of health). Emotions are forms of insanity (and not moments of understanding). We wrestle with, become ill from, and are driven mad by love, sadness, anger, guilt, shame, and even seemingly more beneficial states like amusement. The opposite framing is nearly unthinkable to the Western mind: imagine referring to anger, love, or gratitude as a friend, a form of health, or a kind of insight or clarity. We assume that emotions are lower, less sophisticated, more primitive ways of perceiving the world, especially when juxtaposed with loftier forms of reason.
METAPHORS OF EMOTIONS
Emotions = Opponents
I’m wrestling with my grief
My enthusiasm got the best of me
I couldn’t hold back my laughter