Many authors since Teilhard de Chardin have written about the creation of some sort of “overmind,” into which human consciousness might someday either evolve or be subsumed. Traditionally this is presented as a simple choice between obstinate individualism on the one hand and being homogenized and absorbed on the other. I have always found this either-or dichotomy simplistic and have tried to present a different point of view here. Still, the basic concept goes back a long way.
The idea for depicting a space shuttle, crash-landed on Easter Island, was provoked by a Lee Correy science fiction story, “Shuttle Down,” which appeared in Analog Magazine a decade ago.
Likewise, much of the discussion of human consciousness was inspired by articles in respectable neuroscience journals, or cribbed from innovative thinkers like Marvin Minsky, Robert Ornstein, and even Julian Jaynes, whose famous book on the origin of consciousness might well have made a splendid science fiction novel.
The Helvetian War, on the other hand, I can blame on no one but myself. (I expect it will probably cause me no little grief.) Nevertheless, for this book I needed some dark, traumatic conflict to reverberate in my characters’ past — as Vietnam, World War II, and the Holocaust still make contemporary folk twitch in recollection. It had to be something at once both chilling and surprising, as so many events over the last fifty years have been. (And frankly, I’ve had it with stereotyped superpower schemes, accidental missile launches, and other clichés.) So I tried to come up with a scenario that — if not very likely — was at least plausible in its own context. Then I chose to center it on a nation that’s presently among the very last anyone would think of as a serious threat to peace. I don’t know if it works, but so far it has rocked a few people back and made them say, “Huh!” That’s good enough for me.
Speaking of war — one reader asked why I barely refer to one of today’s principal concerns… the Great Big War On Drugs. Will it have been solved by the year 2038?
Well, not by any program or approach now being tried, that’s for sure. I’m not fatalistic, though. It makes some sense to regulate when and how self-destructive citizens can stupefy themselves, especially in public. Social sanctions have already proven more effective than laws at driving down liquor and tobacco consumption in North America. So much so that distillers and cigarette makers are in a state of demographic panic.
But as for trying to eradicate drugs, right now we just seem to be driving up the price. Addicts commit crimes to finance their habits, and convey billions of dollars to pushers who are, inarguably, among the worst human beings alive. Anyway, it’s been shown that some individuals can secrete endorphins and other hormones at will, using meditation or self-hypnosis or biofeedback. If such techniques become commonplace (as no doubt they will… everything does), shall we then outlaw meditation? Should the police test anyone caught dozing in the park, to make sure he isn’t drugging himself with his own self-made enkephalins?
Reductio ad absurdum. Or, as Dirty Harry once said, we’ve got to learn our limitations.
Which only leads to a much deeper problem, one that has plagued society since before Darwin. That problem is moral ambiguity.
Every culture before ours had codes that precisely defined acceptable behavior and prescribed sanctions to enforce obedience. Such rules, whether religious, cultural, legal, or traditional, were like those a parent imposes on a young child. (And which children themselves insist upon.) In other words, they were explicit, clear-cut, utterly unambiguous.
Eventually, some adolescents grow beyond needing perfectly delineated truths. They even learn to savor a little ambiguity. Meanwhile, others quail before it… or go to the opposite extreme, using ambiguity as an excuse to deny any ethical restraint at all. We see all three of these reactions in contemporary society, as individuals and governments are asked to wrestle with complex issues formerly left to God.
For instance, while some insist that human life begins at the very moment of conception, others ideologically proclaim it absent until birth itself. Neither extreme represents the uncomfortable majority, who — supported by embryology — sense that the battle over abortion is being waged across a murky swamp, bereft of clear borders or road signs.
More quandaries abound. Has mankind yet “made life in a test tube”? That depends on how you define “life,” of course. By one standard, that milestone was passed way back in the seventies. By another, it was reached in the mid-eighties. By yet a third, perhaps it hasn’t happened yet, but it definitely will soon.
As the aged grow more numerous in industrial societies, and as the power and expense of modern medicine grow ever more spectacular, the question of death will also come to vex us. We’ve already spent a decade agonizing over the terminal patient’s “right to die” when faced with the alternative of prolonged, painful support by machinery. A consensus appears to be coalescing around that issue, but what about the next inevitable predicament… when young taxpayers of the next century find themselves paying for endless herculean care demanded by millions of octogenarian former baby boomers who outnumber them, outvote them, and have spent their whole lives accustomed to getting whatever they want?
What will it even mean to be “dead” in the future? Some predict it may soon be possible to cool living human bodies down to near (or even past) freezing, suspending life processes so that people might be revived at a later date. In fact, by primitive standards, it’s already happened — for example, in cases of extreme hypothermia. The can of worms this might open is boggling to consider. And yet, enthusiasts for this nascent field of “cryonics” answer moral quandaries and strict definitions of death by asking, “Why pass binary laws for an analog world?” (In other words, most moral codes say “either-or”… while the universe itself seems to be filled instead with a whole lot of “maybes.”)
To some, this accelerating layering of complexity seems no more than a natural part of our culture’s maturation. To others, the prospect of all certainty dissolving into a muddle of ambiguity seems horrifying. If I were forced to make just one hard prediction for the twenty-first century, it would be that we have seen only the first wave of these puzzling, sometimes heartbreaking conundrums.
Will we face these issues head-on? Or flee once more to the shelter of ancient simplicities? That, I believe, will be the central moral and intellectual dilemma ahead of us.
Finally, let me close this rambling screed with a note on the central topic of this book. Much has been said in recent years about the so-called Gaia hypothesis, which, though credited to James Lovelock, actually has a modern history stretching all the way back to the 1780s and the Scottish geologist James Hutton. Lately, there have been signs of compromise. Proponents have backed off a bit from comparing the planet too closely to a living organism, while critics like Richard Dawkins and James Kirchner now admit the debate over Gaia has been useful to ecology and biology, stimulating many new avenues of research.
In this novel, of course, I portray Gaia as more than a mere metaphor. Some of my scientist colleagues will surely shake their heads over my dramatic denouement, accusing me of “teleology” and other sins. And yet, doesn’t the renowned physicist Ilya Prigogine suggest that the ordering processes of “dissipative structures” almost inevitably lead to increasing levels of organization? Cambridge philosopher John Platt illustrates this progressive acceleration with one telling example — life’s ability to encapsulate itself.