Arthur Byron Cover
Prodigy
The Sense Of Humor
by Isaac Asimov
Would a robot feel a yearning to be human?
You might answer that question with a counter-question. Does a Chevrolet feel a yearning to be a Cadillac?
The counter-question makes the unstated comment that a machine has no yearnings.
But the very point is that a robot is not quite a machine, at least in potentiality. A robot is a machine that is made as much like a human being as it is possible to make it, and somewhere there may be a boundary line that may be crossed.
We can apply this to life. An earthworm doesn't yearn to be a snake; a hippopotamus doesn't yearn to be an elephant. We have no reason to think such creatures are self-conscious and dream of something more than they are. Chimpanzees and gorillas seem to be self-aware, but we have no reason to think that they yearn to be human.
A human being, however, dreams of an afterlife and yearns to become one of the angels. Somewhere, life crossed a boundary line. At some point a species arose that was not only aware of itself but had the capacity to be dissatisfied with itself.
Perhaps a similar boundary line will someday be crossed in the construction of robots.
But if we grant that a robot might someday aspire to humanity, in what way would he so aspire? He might aspire to the possession of the legal and social status that human beings are born to. That was the theme of my story "The Bicentennial Man" (1976), and in his pursuit of such status, my robot-hero was willing to give up all his robotic qualities, one by one, right down to his immortality.
That story, however, was more philosophical than realistic. What is there about a human being that a robot might properly envy: what human physical or mental characteristic? No sensible robot would envy human fragility, or human incapacity to withstand mild changes in the environment, or human need for sleep, or aptitude for the trivial mistake, or tendency to infectious and degenerative disease, or incapacitation through illogical storms of emotion.
He might, more properly, envy the human capacity for friendship and love, his wide-ranging curiosity, his eagerness for experience. I would like to suggest, though, that a robot who yearned for humanity might well find that what he would most want to understand, and most frustratingly fail to understand, would be the human sense of humor.
The sense of humor is by no means universal among human beings, though it does cut across all cultures. I have known many people who didn't laugh, but who looked at you in puzzlement or perhaps disdain if you tried to be funny. I need go no further than my father, who routinely shrugged off my cleverest sallies as unworthy of the attention of a serious man. (Fortunately, my mother laughed at all my jokes, and most uninhibitedly, or I might have grown up emotionally stunted.)
The curious thing about the sense of humor, however, is that, as far as I have observed, no human being will admit to its lack. People might admit they hate dogs and dislike children; they might cheerfully own up to cheating on their income tax or on their marital partner as a matter of right; and they might not object to being considered inhumane or dishonest, through the simple expedient of switching adjectives and calling themselves realistic or businesslike.
However, accuse them of lacking a sense of humor and they will deny it hotly every time, no matter how openly and how often they display such a lack. My father, for instance, always maintained that he had a keen sense of humor and would prove it as soon as he heard a joke worth laughing at (though he never did, in my experience). Why, then, do people object to being accused of humorlessness? My theory is that people recognize (subliminally, if not openly) that a sense of humor is typically human, more so than any other characteristic, and refuse demotion to subhumanity.
Only once did I take up the matter of a sense of humor in a science-fiction story, and that was in my story "Jokester," which first appeared in the December 1956 issue of Infinity Science Fiction and which was most recently reprinted in my collection The Best Science Fiction of Isaac Asimov (Doubleday, 1986).
The protagonist of the story spent his time telling jokes to a computer (I quoted six of them in the course of the story). A computer, of course, is an immobile robot; or, which is the same thing, a robot is a mobile computer; so the story deals with robots and jokes. Unfortunately, the problem in the story for which a solution was sought was not the nature of humor, but the source of all the jokes one hears. And there is an answer, too, but you'll have to read the story for that.
However, I don't just write science fiction. I write whatever it falls into my busy little head to write, and (by some undeserved stroke of good fortune) my various publishers are under the weird impression that it is illegal not to publish any manuscript I hand them. (You can be sure that I never disabuse them of this ridiculous notion.)
Thus, when I decided to write a joke book, I did, and Houghton Mifflin published it in 1971 under the title of Isaac Asimov's Treasury of Humor. In it, I told 640 jokes that I happened to have as part of my memorized repertoire. (I also have enough for a sequel to be entitled Isaac Asimov Laughs Again, but I can't seem to get around to writing it no matter how long I sit at the keyboard and how quickly I manipulate the keys.) I interspersed those jokes with my own theories concerning what is funny and how one makes what is funny even funnier.
Mind you, there are as many different theories of humor as there are people who write on the subject, and no two theories are alike. Some are, of course, much stupider than others, and I felt no embarrassment whatever in adding my own thoughts on the subject to the general mountain of commentary.
It is my feeling, to put it as succinctly as possible, that the one necessary ingredient in every successful joke is a sudden alteration in point of view. The more radical the alteration, the more suddenly it is demanded, the more quickly it is seen, the louder the laugh and the greater the joy.
Let me give you an example with a joke that is one of the few I made up myself:
Jim comes into a bar and finds his best friend, Bill, at a corner table gravely nursing a glass of beer and wearing a look of solemnity on his face. Jim sits down at the table and says sympathetically, "What's the matter, Bill?"
Bill sighs, and says, "My wife ran off yesterday with my best friend."
Jim says, in a shocked voice, "What are you talking about, Bill? I'm your best friend."
To which Bill answers softly, "Not anymore."
I trust you see the change in point of view. The natural supposition is that poor Bill is sunk in gloom over a tragic loss. It is only with the last three words that you realize, quite suddenly, that he is, in actual fact, delighted. And the average human male is sufficiently ambivalent about his wife (however beloved she might be) to greet this particular change in point of view with delight of his own.
Now, if a robot is designed to have a brain that responds to logic only (and of what use would any other kind of robot brain be to humans who are hoping to employ robots for their own purposes?), a sudden change in point of view would be hard to achieve. It would imply that the rules of logic were wrong in the first place or were capable of a flexibility that they obviously don't have. In addition, it would be dangerous to build ambivalence into a robot brain. What we want from him is decision and not the to-be-or-not-to-be of a Hamlet.
Imagine, then, telling a robot the joke I have just given you, and imagine the robot staring at you solemnly after you are done, and questioning you, thus:
Robot: "But why is Jim no longer Bill's best friend? You have not described Jim as doing anything that would cause Bill to be angry with him or disappointed in him."