We humans may share many behaviors with lower animals, but flirting with a stereo speaker is surely not one of them. Or is it? We’ve seen that people unintentionally express their thoughts and feelings even when they might prefer to keep them secret, but do we also react automatically to nonverbal social cues? Do we respond, like the smitten cowbird, even in situations in which our logical and conscious minds would deem the reaction inappropriate or undesirable?

A few years ago, a Stanford communications professor named Clifford Nass sat a couple hundred computer-savvy students in front of computers that spoke to them in prerecorded voices.3 The purpose of the exercise, the students were told, was to prepare for a test with the assistance of a computerized tutoring session. The topics taught ranged from “mass media” to “love and relationships.” After completing the tutoring and the test, the students received an evaluation of their performance, delivered either by the same computer that taught them or by another computer. Finally, the students themselves completed the equivalent of a course evaluation form, in which they rated both the course and their computer tutor.

Nass was not really interested in conducting a computer course on mass media or love and relationships. These earnest students were Nass’s cowbirds, and in a series of experiments he and some colleagues studied them carefully, gathering data on the way they responded to the lifeless electronic computer, gauging whether they would react to a machine’s voice as if the machine had human feelings, motivations, or even a human gender. It would be absurd, of course, to expect the students to say “Excuse me” if they bumped into the monitor. That would be a conscious reaction, and in their conscious ruminations, these students certainly realized that the machine was not a person. But Nass was interested in another level of their behavior, behavior the students did not purposely engage in, social behavior he describes as “automatic and unconscious.”

In one of the experiments, the researchers arranged for half their subjects to be tutored and evaluated by computers with male voices, and half by computers with female voices. Other than that, there was no difference in the sessions—the male computers presented the same information in the same sequence as the females, and the male and female computers delivered identical assessments of the students’ performance. As we’ll see in Chapter 7, if the tutors had been real people, the students’ evaluations of their teachers would probably reflect certain gender stereotypes. For example, consider the stereotype that women know more about relationship issues than men. Ask a woman what bonds couples together, and you might expect her to respond, “Open communication and shared intimacy.” Ask a guy, and you might expect him to say, “Huh?” Studies show that as a result of this stereotype, even when a woman and a man have equal ability in that area, the woman is often perceived as more competent. Nass sought to discover whether the students would apply those same gender stereotypes to the computers.

They did. Those who had female-voiced tutors for the love-and-relationships material rated their teachers as having more sophisticated knowledge of the subject than did those who had male-voiced tutors, even though the two computers had given identical lessons. But the “male” and “female” computers got equal ratings when the topic was a gender-neutral one, like mass media. Another unfortunate gender stereotype suggests that forcefulness is desirable in men, but unseemly in women. And sure enough, students who heard a forceful male-voiced computer tutor rated it as being significantly more likable than those who heard a forceful female-voiced tutor, even though, again, both the male and the female voices had uttered the same words. Apparently, even when coming from a computer, an assertive personality in a female is more likely to come off as overbearing or bossy than the same personality in a male.

The researchers also investigated whether people will apply the social norms of politeness to computers. For example, when put in a position where they have to criticize someone face-to-face, people often hesitate or sugarcoat their true opinion. Suppose I ask my students, “Did you like my discussion of the stochastic nature of the foraging habits of wildebeests?” Judging from my experience, I’ll get a bunch of nods and a few audible murmurs. But no one will be honest enough to say, “Wildebeests? I didn’t hear a word of your boring lecture. But the monotonic drone of your voice did provide a soothing background as I surfed the web on my laptop.” Not even those who sat in the front row and clearly were surfing the web on their laptops would be that blunt. Instead, students save that kind of critique for their anonymous course-evaluation forms. But what if the one asking for the input was a talking computer? Would the students have the same inhibition against delivering a harsh judgment “face-to-face” to a machine? Nass and his colleagues asked half the students to enter their course evaluation on the same computer that had tutored them, and the other half to enter it on a different machine, a machine that had a different voice. Certainly the students would not consciously sugarcoat their words to avoid hurting the machine’s feelings—but as you probably guessed, they did indeed hesitate to criticize the computer to its “face.” That is, they rated the computer teacher as far more likable and competent when offering their judgment directly to that computer than when a different computer was gathering the input.4

Having social relations with a prerecorded voice is not a trait you’d want to mention in a job application. But, like the cowbirds, these students did treat it as if it were a member of their species, even though there was no actual person attached. Hard to believe? It was for the actual subjects. When, after some of the studies had been concluded, the researchers informed the students of the experiment’s true purpose, they all insisted with great confidence that they would never apply social norms to a computer.5 The research shows they were wrong. While our conscious minds are busy thinking about the meaning of the words people utter, our unconscious is busy judging the speaker by other criteria, and the human voice connects with a receiver deep within the human brain, whether that voice emanates from a human being or not.

PEOPLE SPEND A lot of time talking and thinking about how members of the opposite sex look but very little time paying attention to how they sound. To our unconscious minds, however, voice is very important. Our genus, Homo, has been evolving for a couple million years. Brain evolution happens over many thousands or millions of years, but we’ve lived in civilized society for less than 1 percent of that time. That means that though we may pack our heads full of twenty-first-century knowledge, the organ inside our skull is still a Stone Age brain. We think of ourselves as a civilized species, but our brains are designed to meet the challenges of an earlier era. Among birds and many other animals, voice seems to play a great role in meeting one of those demands—reproduction—and it seems to be similarly important in humans. As we’ll see, we pick up a great many sophisticated signals from the tone and quality of a person’s voice and from the cadence, but perhaps the most important way we relate to voice is directly analogous to the reaction of the cowbirds, for in humans, too, females are attracted to males by certain aspects of their “call.”