My point is the same. I suggest we learn something if we think about the “bot man” theory of regulation — one focused on the regulation of code. We will learn something important, in other words, if we imagine the target of regulation as a maximizing entity, and consider the range of tools the regulator has to control that machine.

Code will be a central tool in this analysis. It will present the greatest threat to both liberal and libertarian ideals, as well as their greatest promise. We can build, or architect, or code cyberspace to protect values that we believe are fundamental. Or we can build, or architect, or code cyberspace to allow those values to disappear. There is no middle ground. There is no choice that does not include some kind of building. Code is never found; it is only ever made, and only ever made by us. As Mark Stefik puts it, “Different versions of cyberspace support different kinds of dreams. We choose, wisely or not.”[9] Or again, code “determines which people can access which digital objects . . . How such programming regulates human interactions . . . depends on the choices made.”[10] Or, more precisely, a code of cyberspace, defining the freedoms and controls of cyberspace, will be built. About that there can be no debate. But by whom, and with what values? That is the only choice we have left to make.
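
To see the point in the most literal way, consider a minimal sketch, written here purely for illustration (the names, the policy, and the single flag below are invented for this example, not drawn from any real system): the same access-control routine, with one default flipped by its builder, constitutes either an open space or a permission-based one.

```python
# A hypothetical sketch of the argument that code embodies values.
# Nothing here describes a real system; the names and the policy
# are invented to illustrate the choice an architect makes.

from dataclasses import dataclass

@dataclass
class User:
    name: str
    licensed: bool  # has this user been "officially sanctioned"?

def can_access(user: User, require_permission: bool) -> bool:
    """Decide access under the policy the builder wrote into the code.

    If require_permission is True, the space is a permission culture:
    only sanctioned users get in. If False, access is the default.
    """
    if require_permission:
        return user.licensed
    return True

# The "constitutional" choice is a single value the coder picks:
alice = User("alice", licensed=False)
print(can_access(alice, require_permission=False))  # True: an open architecture
print(can_access(alice, require_permission=True))   # False: a closed architecture
```

The lesson of the sketch is not the code but the choice embedded in it: whichever default the builder writes becomes the effective law of the space, enforced automatically against everyone who enters.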

My argument is not for some top-down form of control. The claim is not that regulators must occupy Microsoft. A constitution envisions an environment; as Justice Holmes said, it “calls into life a being the development of which cannot be foreseen.”[11] Thus, to speak of a constitution is not to describe a hundred-day plan. It is instead to identify the values that a space should guarantee. It is not to describe a “government”; it is not even to select (as if a single choice must be made) between bottom-up or top-down control. In speaking of a constitution in cyberspace we are simply asking: What values should be protected there? What values should be built into the space to encourage what forms of life?

The “values” at stake here are of two sorts — substantive and structural. In the American constitutional tradition, we worried about the second first. The framers of the Constitution of 1787 (enacted without a Bill of Rights) were focused on structures of government. Their aim was to ensure that a particular government (the federal government) did not become too powerful. And so they built into the Constitution’s design checks on the power of the federal government and limits on its reach over the states.

Opponents of that Constitution insisted that more checks were needed, that the Constitution needed to impose substantive limits on government’s power as well as structural limits. And thus was the Bill of Rights born. Ratified in 1791, the Bill of Rights promised that the federal government would not remove certain freedoms — of speech, privacy, and due process. And it guaranteed that the commitment to these substantive values would remain despite the passing fancies of normal, or ordinary, government. These values — both substantive and structural — were thus entrenched through our constitutional design. They can be changed, but only through a cumbersome and costly process.

We face the same questions in constituting cyberspace, but we have approached them from the opposite direction.[12] Already we are struggling with substance: Will cyberspace promise privacy or access? Will it enable a free culture or a permission culture? Will it preserve a space for free speech? These are choices of substantive value, and they are the subject of much of this book.

But structure matters as well, though we have not even begun to understand how to limit, or regulate, arbitrary regulatory power. What “checks and balances” are possible in this space? How do we separate powers? How do we ensure that one regulator, or one government, doesn’t become too powerful? How do we guarantee it is powerful enough?

Theorists of cyberspace have been talking about these questions since its birth.[13] But as a culture, we are just beginning to get it. As we slowly come to see how different structures within cyberspace affect us — how its architecture, in a sense I will define below, “regulates” us — we slowly come to ask how these structures should be defined. The first generation of these architectures was built by a noncommercial sector — researchers and hackers, focused upon building a network. The second generation has been built by commerce. And the third, not yet off the drawing board, could well be the product of government. Which regulator do we prefer? Which regulators should be controlled? How does society exercise that control over entities that aim to control it?

In Part III, I bring these questions back down to the ground. I consider three areas of controversy — intellectual property, privacy, and free speech — and identify the values within each that cyberspace will change. These values are the product of the interaction between law and technology. How that interaction plays out is often counter-intuitive. My aim in this part is to map that interaction, so as to map a way that we might, using the tools of Part II, preserve the values that are important to us within each context.

Part IV internationalizes these questions. Cyberspace is everywhere, meaning those who populate cyberspace come from everywhere. How will the sovereigns of everywhere live with the claimed “sovereignty” of cyberspace? I map a particular response that seems to me inevitable, one that will reinforce the conclusion of Part I.

The final part, Part V, is the darkest. The central lesson of this book is that cyberspace requires choices. Some of these are, and should be, private: Whether an author wants to enforce her copyright; how a citizen wants to protect his privacy. But some of these choices involve values that are collective. I end by asking whether we — meaning Americans — are up to the challenge that these choices present. Are we able to respond rationally — meaning both (1) are we able to respond without undue or irrational passion, and (2) do we have institutions capable of understanding and responding to these choices?

My strong sense is that we are not, at least now, able to respond rationally to these challenges. We are at a stage in our history when we urgently need to make fundamental choices about values, but we should trust no institution of government to make such choices. Courts cannot do it, because as a legal culture we don’t want courts choosing among contested matters of values. Congress should not do it because, as a political culture, we are deeply skeptical (and rightly so) about the product of this government. There is much to be proud of in our history and traditions. But the government we now have is a failure. Nothing important should be trusted to its control, even though everything important is.

Change is possible. I don’t doubt that revolutions remain in our future. But I fear that it is too easy for the government, or specially powered interests, to dislodge these revolutions, and that too much will be at stake for it to allow real change to succeed. Our government has already criminalized the core ethic of this movement, transforming the meaning of hacker into something quite alien to its original sense. Through extremism in copyright regulation, it is criminalizing the core creativity that this network could produce. And this is only the beginning.

Things could be different. They are different elsewhere. But I don’t see how they could be different for us just now. This no doubt is simply a confession of the limits of my own imagination. I would be grateful to be proven wrong. I would be grateful to watch as we relearn — as the citizens of the former Communist republics are learning — how to escape these disabling ideas about the possibilities for governance. But nothing in the past decade, and especially nothing in the past five years, has convinced me that my skepticism about governance was misplaced. Indeed, events have only reinforced that pessimism.

9. Mark Stefik, “Epilogue: Choices and Dreams,” in Internet Dreams: Archetypes, Myths, and Metaphors, ed. Mark Stefik (Cambridge, Mass.: MIT Press, 1996), 390.

10. Mark Stefik, The Internet Edge: Social, Technical, and Legal Challenges for a Networked World (Cambridge, Mass.: MIT Press, 1999), 14.

11. Missouri v. Holland, 252 U.S. 416, 433 (1920).

12. This debate is nothing new to American democracy. See Does Technology Drive History?: The Dilemma of Technological Determinism, ed. Merritt Roe Smith and Leo Marx (Cambridge, Mass.: MIT Press, 1994), 1–35 (“If carried to extremes, Jefferson worried, the civilizing process of large-scale technology and industrialization might easily be corrupted and bring down the moral and political economy he and his contemporaries had worked so hard to erect”).

13. Richard Stallman, for example, organized resistance to the emergence of passwords at MIT. Passwords are an architecture that facilitates control by excluding users not “officially sanctioned.” Steven Levy, Hackers (Garden City, N.Y.: Anchor Press/Doubleday, 1984), 422–23.