Now under either scenario — either when the FCC allocates spectrum or when it allocates property rights to spectrum — there is a role for the government. That role is most extensive when the FCC allocates spectrum: Then the FCC must decide who should get what. When spectrum is property, the FCC need only enforce the boundaries that the property right establishes. It is, in a way, a less troubling form of government action than the government deciding who it likes best.
Both forms of government regulation, however, produce a “press” (at least the press that uses spectrum) that is very different from the “press” at the founding. In 1791, the “press” was not the New York Times or the Wall Street Journal. It was not composed of large private organizations, each with millions of readers. Rather, the press was much like the Internet today. The cost of a printing press was low, the readership was slight, the government subsidized its distribution, and anyone (within reason) could become a publisher. An extraordinary number did[77].
Spectrum licenses and spectrum property, however, produce a very different market. The cost of securing either becomes a barrier to entry. It would be like a rule requiring a “newspaper license” in order to publish a newspaper. If that license was expensive, then fewer could publish[78].
Of course, under our First Amendment it would be impossible to imagine the government licensing newspapers (at least if that license was expensive and targeted at the press). That’s because we all have a strong intuition that we want competition to determine which newspapers can operate, not artificial governmental barriers. And we all intuitively know that there’s no need for the government to “rationalize” the newspaper market. People are capable of choosing among competing newspapers without any help from the government.
So what if the same were true about spectrum? Most of us haven’t any clue about how what we call “spectrum” works. The weird sounds and unstable reception of our FM and AM radios make us think some kind of special magic happens between the station and receiver. Without that magic, radio waves would “interfere” with each other. Some special coordination is thought necessary to avoid such “collision” and the inevitable chaos that would result. Radio waves, in this view, are delicate invisible airplanes, which need careful air traffic controllers to make sure disaster doesn’t strike.
But what most of us think we know about radio is wrong. Radio waves aren’t butterflies. They don’t need the protection of federal bureaucrats to do their work. And as technology that is totally familiar to everyone using the Internet demonstrates, there is in fact very little reason for either spectrum-licenses or spectrum-property. The invisible hand, here, can do all the work.
To get a clue about how, consider two contexts, at least one of which everyone is familiar with. No doubt, radio waves are different from sound waves. But for our purposes here, the following analogy works.
Imagine you’re at a party. There are 50 people in the room, and each of them is talking. Each is therefore producing sound waves. But though these many speakers produce different sound waves, we don’t have any trouble listening to the person speaking next to us. So long as no one starts shouting, we can manage to hear quite well. More generally, a party (at least early in the evening) is comprised of smart speakers and listeners who coordinate their speaking so that most everyone in the room can communicate without any real trouble.
Radios could function similarly — if the receiver and transmitter were analogously intelligent. Rather than the dumb receivers that ordinary FM or AM radio relies upon, smart radios could figure out what to listen to and communicate with just as people at a party learn to focus on the conversation they’re having.
The best evidence of this is the second example I offer to dislodge the common understanding of how spectrum works. This example is called “WiFi.” WiFi is the popular name of a particular set of protocols that together enable computers to “share” bands of unlicensed spectrum. The most popular of these bands are in the 2.4 GHz and 5 GHz range. WiFi enables a large number of computers to use that spectrum to communicate.
Most of the readers of this book have no doubt come across WiFi technology. I see it every day I teach: a room full of students, each with a laptop, the vast majority on the Internet — doing who knows what. The protocols within each machine enable them all to “share” a narrow band of spectrum. No government or regulator tells each machine when it can speak, any more than we need the government to make sure that people can communicate at cocktail parties.
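The “cocktail party” coordination that WiFi relies on can be sketched in code. The following toy simulation (a hypothetical illustration of listen-before-talk, not the actual 802.11 protocol logic) shows the core idea: each radio senses whether the shared band is busy before transmitting, and defers if another radio is already talking. No central authority assigns turns.

```python
import random

class Channel:
    """A single shared band of spectrum; busy while any radio transmits."""
    def __init__(self):
        self.busy = False

class Radio:
    """A 'smart' radio: it listens before it talks (carrier sense)."""
    def __init__(self, name, channel):
        self.name = name
        self.channel = channel
        self.sent = []

    def try_send(self, message):
        # Listen first: if another radio is talking, defer and retry later.
        if self.channel.busy:
            return False
        self.channel.busy = True   # occupy the band
        self.sent.append(message)  # ... transmit ...
        self.channel.busy = False  # release the band
        return True

# Fifty radios share one band with no central coordinator.
channel = Channel()
radios = [Radio(f"radio-{i}", channel) for i in range(50)]
random.seed(1)
pending = [(r, f"msg-{i}") for i, r in enumerate(radios)]
while pending:
    radio, msg = random.choice(pending)   # radios contend in random order
    if radio.try_send(msg):
        pending.remove((radio, msg))

# Every radio eventually got its message through.
assert all(len(r.sent) == 1 for r in radios)
```

The point of the sketch is the party analogy made literal: coordination emerges from a shared etiquette built into each device, not from a regulator deciding who may speak.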
These examples are of course small and limited. But there is literally a whole industry now devoted to spreading the lesson of this technology as broadly as possible. Some theorists believe the most efficient use of all spectrum would build upon these models — using ultra-wide-band technologies to maximize the capacity of radio spectrum. But even those who are skeptical of spectrum utopia are coming to see that our assumptions about how spectrum must be allocated are driven by ignorance about how spectrum actually works.
The clearest example of this false assumption is the set of intuitions we’re likely to have about the necessary limitations in spectrum utilization. These assumptions are reinforced by the idea of spectrum-property. The image we’re likely to have is of a resource that can be overgrazed. Too many users can clog the channels, just as too many cattle can overgraze a field.
Congestion is certainly a possible consequence of spectrum usage. But the critical point to recognize — and again, a point that echoes throughout this book — is that the possibility of congestion depends upon the design. WiFi networks can certainly become congested. But a different architecture for “sharing” spectrum need not. Indeed, under this design, more users don’t deplete capacity — they increase it[79].
The key to making this system possible is for every receiver to become a node in the spectrum architecture. Users would then not be merely consumers of someone else’s broadcast; every receiver would also be a broadcaster. Just as peer-to-peer technologies such as BitTorrent harness the bandwidth of users to share the cost of distributing content, users within a certain mesh-network architecture for spectrum could actually increase the spectrum capacity of the network. Under this design, then, the more who use the spectrum, the more spectrum there is for others to use — producing not a tragedy of the commons, but a comedy of the commons.
The basic architecture of this mesh system imagines that every computer in the system is both a receiver and a transmitter. Of course, in one sense, that’s what these machines already are — a computer attached to a WiFi network both receives transmissions from and sends transmissions to the broadcasting node. But that architecture is a one-to-many broadcasting architecture. The mesh architecture is something different. In a mesh architecture, each radio can send packets of data to any other radio within the mesh. Or, put differently, each is a node in the network. And with every new node, the capacity of the network could increase. In a sense, this is precisely the architecture of much of the Internet. Machines have addresses; they collect packets addressed to that machine from the Net[80]. Your machine shares the Net with every other machine, but the Net has a protocol about sharing this commons. Once this protocol is agreed on, no further regulation is required.
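The mesh idea can be illustrated with a toy model (a deliberate simplification, not any real mesh protocol): every node both receives and forwards packets, so a packet can hop through intermediate nodes to reach radios that are out of the sender’s direct range. Adding a node adds links — the new user expands what the network can carry.

```python
from collections import deque

class MeshNode:
    """Every node is both receiver and transmitter: it relays for others."""
    def __init__(self, name):
        self.name = name
        self.neighbors = []  # nodes within radio range

def link(a, b):
    """Two nodes within radio range of each other can exchange packets."""
    a.neighbors.append(b)
    b.neighbors.append(a)

def route(src, dst):
    """Breadth-first search: find a multi-hop path through the mesh."""
    frontier = deque([[src]])
    seen = {src}
    while frontier:
        path = frontier.popleft()
        if path[-1] is dst:
            return [n.name for n in path]
        for nxt in path[-1].neighbors:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None  # unreachable

# Three nodes in a line: A and C are out of each other's radio range,
# but B relays the packet between them.
a, b, c = MeshNode("A"), MeshNode("B"), MeshNode("C")
link(a, b)
link(b, c)
assert route(a, c) == ["A", "B", "C"]

# A new user, D, in range of both A and C, creates a second path:
# joining the mesh added capacity rather than depleting it.
d = MeshNode("D")
link(a, d)
link(d, c)
assert route(a, c) is not None
```

This is the “comedy of the commons” in miniature: because each new device is also infrastructure, growth in users is growth in the network itself.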
77. Paul Starr,
78. Yochai Benkler, "Net Regulation: Taking Stock and Looking Forward,"
79. See, for example, research at MIT to build viral mesh networks which increase in capacity as the number of users increases. Collaborative (Viral) Wireless Networks, available at http://web.media.mit.edu/~aggelos/viral.html (cached: http://www.webcitation.org/5J6nWkYbP).
80. Ethernet effectively functions like this. Data on an Ethernet network are streamed into each machine on that network. Each machine sniffs the data and then pays attention to the data intended for it. This process creates an obvious security hole: "sniffers" can be put on "promiscuous mode" and read packets intended for other machines; see Loshin,