The effect of each of these strategies would be to increase the prevalence of digital IDs. And at some point, there would be a tipping. There is an obvious benefit to many on the Net to be able to increase confidence about the entity with whom they are dealing. These digital IDs would be a tool to increase that confidence. Thus, even if a site permits itself to be accessed without any certification by the user, any step beyond that initial contact could require carrying the proper ID. The norm would be to travel in cyberspace with an ID; those who refuse would find the cyberspace that they could inhabit radically reduced.
The consequence of this tipping would be to effectively stamp every action on the Internet — at a minimum — with a kind of digital fingerprint. That fingerprint — at a minimum — would enable authorities to trace any action back to the party responsible for it. That tracing — at a minimum — could require judicial oversight before any trace could be effected. And that oversight — at a minimum — could track the ordinary requirements of the Fourth Amendment.
At a minimum. For the critical part in this story is not that the government could induce an ID-rich Internet. Obviously it could. Instead, the important question is the kind of ID-rich Internet the government induces.
Compare two very different sorts of digital IDs, both of which we can understand in terms of the “wallet” metaphor used in Chapter 4 to describe the evolving technology of identity that Microsoft is helping to lead.
One sort of ID would work like this: Every time you need to identify yourself, you turn over your wallet. The party demanding identification rummages through the wallet, gathering whatever data he wants.
The second sort of ID works along the lines of the Identity Layer described in Chapter 4: When you need to identify yourself, you can provide the minimal identification necessary. So if you need to certify that you’re an American, only that bit gets revealed. Or if you need to certify that you’re over 18, only that fact gets revealed.
On the model of the second form of digital ID, it then becomes possible to imagine an ultra-minimal ID: an identification that reveals nothing on its face, but facilitates traceability. Again, a kind of digital fingerprint, meaningless unless decoded and, once decoded, linked back to a responsible agent.
These two architectures stand at opposite ends of a spectrum. They produce radically different consequences for privacy and anonymity. Perfect anonymity is possible with neither; the minimal effect of both is to make behavior traceable. But with the second mode, that traceability itself can be heavily regulated. Thus, there should be no possible traceability when the only action at issue is protected speech. And where a trace is to be permitted, it should only be permitted if authorized by proper judicial action. Thus the system would preserve the capacity to identify who did what when, but it would only realize that capacity under authorized circumstances.
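The contrast between these two architectures, and the regulated traceability of the second, can be sketched in code. This is an illustrative model only, not any real identity system; the class names, the `warrant` flag standing in for judicial authorization, and the authority-held lookup table are all assumptions made for the example.

```python
import secrets

class Wallet:
    """Holds a user's identity attributes."""
    def __init__(self, attributes):
        self.attributes = dict(attributes)

    # Type 1: hand over the whole wallet; the verifier sees everything.
    def reveal_all(self):
        return dict(self.attributes)

    # Type 2: reveal only the single fact the verifier asked for.
    def reveal(self, claim):
        if claim == "over_18":
            return {"over_18": self.attributes["age"] >= 18}
        if claim == "citizenship":
            return {"citizenship": self.attributes["citizenship"]}
        raise KeyError(claim)

class TraceAuthority:
    """Issues opaque tokens that are meaningless on their face;
    only the authority can link a token back to an agent, and only
    under an authorized process (modeled here as a warrant flag)."""
    def __init__(self):
        self._registry = {}  # token -> responsible agent

    def issue_token(self, agent):
        token = secrets.token_hex(16)  # reveals nothing by itself
        self._registry[token] = agent
        return token

    def decode(self, token, warrant):
        if not warrant:  # no judicial authorization, no trace
            raise PermissionError("trace not authorized")
        return self._registry[token]
```

Under the second architecture, a site asking "are you over 18?" learns only that one bit: `Wallet({"name": "Alice", "age": 34, "citizenship": "US"}).reveal("over_18")` returns `{"over_18": True}` and nothing more, while the trace back to Alice is possible only through the authority, and only with authorization.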
The difference between these two ID-enabled worlds, then, is all the difference in the world. And critically, which world we get depends completely upon the values that guide the development of this architecture. ID-type 1 would be a disaster for privacy as well as security. ID-type 2 could radically increase privacy, as well as security, for all except those whose behavior can legitimately be tracked.
Now, the feasibility of the government effecting either ID depends crucially upon the target of regulation. It depends upon there being an entity responsible for the code that individuals use, and it requires that these entities can be effectively regulated. Is this assumption really true? The government may be able to regulate the telephone companies, but can it regulate a diversity of code writers? In particular, can it regulate code writers who are committed to resisting precisely such regulation?
In a world where the code writers were the sort of people who governed the Internet Engineering Task Force[29] of a few years ago, the answer is probably no. The underpaid heroes who built the Net had ideological reasons to resist the government's mandate, and they were not likely to yield to its threats. Thus, they would provide an important check on the government's power over the architectures of cyberspace.
But as code writing becomes commercial — as it becomes the product of a smaller number of large companies — the government’s ability to regulate it increases. The more money there is at stake, the less inclined businesses (and their backers) are to bear the costs of promoting an ideology.
The best example is the history of encryption. From the very start of the debate over the government’s control of encryption, techies have argued that such regulations are silly. Code can always be exported; bits know no borders. So the idea that a law of Congress would control the flow of code was, these people argued, absurd.
The fact is, however, that the regulations had a substantial effect. Not on the techies — who could easily get encryption technologies from any number of places on the Net — but on the businesses writing software that would incorporate such technology. Netscape or IBM was not about to build and sell software in violation of U.S. regulations. The United States has a fairly powerful threat against these two companies. As the techies predicted, regulation did not control the flow of bits. But it did quite substantially inhibit the development of software that would use these bits.[30]
The effect has been profound. Companies that were once bastions of unregulability are now becoming producers of technologies that facilitate regulation. For example, Network Associates, inheritor of the encryption program PGP, was originally a strong opponent of regulation of encryption; now it offers products that facilitate corporate control of encryption and recovery of keys.[31] Key recovery creates a corporate back door, which, in many contexts, is far less restricted than a governmental back door.
Cisco is a second example.[32] In 1998 Cisco announced a router product that would enable an ISP to encrypt Internet traffic at the link level — between gateways, that is.[33] But this router would also have a switch that would disable the encryption of the router data and facilitate the collection of unencrypted Internet traffic. This switch could be flipped at the government’s command; in other words, the data would be encrypted only when the government allowed it to be.
The point in both cases is that the government is a player in the market for software. It affects the market both by creating rules and by purchasing products. Either way, it influences the supply of commercial software providers who exist to provide what the market demands.
Veterans of the early days of the Net might ask these suppliers, “How could you?”
“It’s just business,” is the obvious reply.
East Coast and West Coast Codes
Throughout this section, I’ve been speaking of two sorts of code. One is the “code” that Congress enacts (as in the tax code or “the U.S. Code”). Congress passes an endless array of statutes that say in words how to behave. Some statutes direct people; others direct companies; some direct bureaucrats. The technique is as old as government itself: using commands to control. In our country, it is a primarily East Coast (Washington, D.C.) activity. Call it “East Coast Code.”
29. See the description in Scott Bradner, "The Internet Engineering Task Force," in
30. Michael Froomkin makes a similar point: "Export control rules have had an effect on the domestic market for products with cryptographic capabilities such as e-mail, operating systems, and word processors. Largely because of the ban on export of strong cryptography, there is today no strong mass-market standard cryptographic product within the U.S. even though a considerable mathematical and programming base is fully capable of creating one"; "It Came from Planet Clipper," 19.
31. See "Network Associates and Key Recovery," available at http://web.archive.org/web/19981207010043/http://www.nai.com/products/security/key.asp (cached: http://www.webcitation.org/5KytMd1L8).
32. Cisco has developed products that incorporate the use of network-layer encryption through the IP Security (IPSec) protocol. For a brief discussion of IPSec, see Cisco Systems, Inc., "IP Security–IPSec Overview," available at http://web.archive.org/web/19991012165050/http://cisco.com/warp/public/cc/cisco/mkt/ios/tech/security/prodlit/ipsec_ov.htm (cached: http://www.webcitation.org/5Iwt19135). For a more extensive discussion, see Cisco Systems, Inc., "Cisco IOS Software Feature: Network-Layer Encryption — White Paper"; Cisco Systems, Inc., "IPSec — White Paper," available at http://web.archive.org/web/20020202003100/http://www.cisco.com/warp/public/cc/techno/protocol/ipsecur/ipsec/tech/ipsec_wp.htm (cached: http://www.webcitation.org/5Iwt3l7WB); see also Dawn Bushaus, "Encryption Can Help ISPs Deliver Safe Services,"
33. See the Internet Architecture Board statement on "private doorbell" encryption, available at http://www.iab.org/documents/docs/121898.html (cached: http://www.webcitation.org/5Iwt6RMid).