But the efficiency of communication in cyberspace means that the cost of sending spam is radically lower, which radically increases the quantity of spam that it is rational to send. Even if you make only a .01 percent profit, if the cost of sending the spam is close to zero, you still make money.
Thus, as with porn, a different architectural constraint means a radically different regulation of behavior. Both porn and spam are reasonably regulated in real space; in cyberspace, this difference in architecture means neither is effectively regulated at all.
And thus the question that began this section: Is there a way to “regulate” spam and porn to at least the same level of regulation that both face in real space?
Regulating Net-Porn
Of all the possible speech regulations on the Net (putting copyright to one side for the moment), the United States Congress has been most eager to regulate porn. That eagerness, however, has not yet translated into success. Congress has passed two pieces of major legislation. The first was struck down completely. The second continues to be battered down in its struggle through the courts.
The first statute was the product of a scare. Just as the Net was coming into the popular consciousness, one of its seedier aspects came into view: porn on the Net. This concern became widespread in the United States early in 1995[36]. Its source was an extraordinary rise in the number of ordinary users of the Net, and therefore a rise in use by kids, and an even more extraordinary rise in the availability of what many call porn on the Net. An extremely controversial (and deeply flawed) study published in the Georgetown Law Journal reported that the Net was awash in porn[37]. Time ran a cover story about its availability[38]. Senators and congressmen were bombarded with demands to do something to regulate “cybersmut.”
Congress responded in 1996 with the Communications Decency Act (CDA). A law of extraordinary stupidity, the CDA practically impaled itself on the First Amendment. The law made it a felony to transmit “indecent” material on the Net to a minor or to a place where a minor could observe it. But it gave speakers on the Net a defense — if they took good-faith, “reasonable, effective” steps to screen out children, then they could speak “indecently.”[39]
There were at least three problems with the CDA, any one of which should have doomed it to well-deserved extinction[40]. The first was the scope of the speech it addressed: “Indecency” is not a category of speech that Congress has the power to regulate (at least not outside the context of broadcasting)[41]. As I have already described, Congress can regulate speech that is “harmful to minors”, or Ginsberg speech, but that is very different from speech called “indecent.” Thus, the first strike against the statute was that it reached too far.
Strike two was vagueness. The form of the allowable defenses was clear: So long as there was an architecture for screening out kids, the speech would be permitted. But the architectures that existed at the time for screening out children were relatively crude, and in some cases quite expensive. It was unclear whether, to satisfy the statute, they had to be extremely effective or just reasonably effective given the state of the technology. If the former, then the defenses were no defense at all, because an extremely effective block was extremely expensive; the cost of a reasonably effective block would not have been so high.
Strike three was the government’s own doing. In arguing its case before the Supreme Court in 1997, the government did little either to narrow the scope of the speech being regulated or to expand the scope of the defenses. It stuck with the hopelessly vague, overbroad definition Congress had given it, and it displayed a poor understanding of how the technology might have provided a defense. As the Court considered the case, there seemed to be no way that an identification system could satisfy the statute without creating an undue burden on Internet speakers.
Congress responded quickly by passing a second statute aimed at protecting kids from porn. This was the Child Online Protection Act (COPA) of 1998[42]. This statute was better tailored to the constitutional requirements. It aimed at regulating speech that was harmful to minors. It allowed commercial websites to provide such speech so long as the website verified the viewer’s age. Yet in June 2003, the Supreme Court enjoined enforcement of the statute[43].
Both statutes respond to a legitimate and important concern. Parents certainly have the right to protect their kids from this form of speech, and it is perfectly understandable that Congress would want to help parents secure this protection.
But both statutes are unconstitutional. That is not, as some suggest, because there is no way Congress could help parents. Rather, both are unconstitutional because the particular way Congress has tried to help parents puts more of a burden on legitimate speech (for adults, that is) than is necessary.
In my view, however, there is a perfectly constitutional statute that Congress could pass that would have an important effect on protecting kids from porn.
To see what that statute looks like, we need to step back a bit from the CDA and COPA to identify what the legitimate objectives of this speech regulation would be.
Ginsberg[44] established that there is a class of speech that adults have a right to but that children do not. States can regulate that class to ensure that such speech is channeled to the proper user and blocked from the improper user.
Conceptually, for such a regulation to work, two questions must be answered:
1. Is the speaker uttering “regulable” speech — meaning speech “harmful to minors”?
2. Is the listener entitled to consume this speech — meaning is he a minor?
And with the answers to these questions, the logic of this regulation is:
IF
(speech == regulable)
AND
(listener == minor)
THEN
block access.
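The logic above can be sketched as a short (and purely illustrative) function. The two boolean inputs are assumptions standing in for the answers to questions #1 and #2, supplied by whichever party can answer most cheaply:

```python
def should_block(speech_is_regulable: bool, listener_is_minor: bool) -> bool:
    """Mirror the rule: IF (speech == regulable) AND (listener == minor)
    THEN block access.

    The speaker is best placed to supply speech_is_regulable; the
    listener is best placed to supply listener_is_minor.
    """
    return speech_is_regulable and listener_is_minor

# An adult seeking regulable speech is not blocked:
assert should_block(speech_is_regulable=True, listener_is_minor=False) is False
# A minor seeking regulable speech is blocked:
assert should_block(speech_is_regulable=True, listener_is_minor=True) is True
# Non-regulable speech is never blocked, whatever the listener's age:
assert should_block(speech_is_regulable=False, listener_is_minor=True) is False
```

The sketch makes the allocation problem concrete: the rule itself is trivial; the hard part, as the discussion that follows shows, is who must bear the cost of computing each input.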
Now between the listener and the speaker, clearly the speaker is in a better position to answer question #1. The listener can’t know whether the speech is harmful to minors until the listener encounters the speech. If the listener is a minor, then it is too late. And between the listener and the speaker, clearly the listener is in a better position to answer question #2. On the Internet especially, it is extremely burdensome for the speaker to certify the age of the listener. It is the listener who knows his age most cheaply.
The CDA and COPA placed the burden of answering question #1 on the speaker, and #2 on both the speaker and the listener. A speaker had to determine whether his speech was regulable, and a speaker and a listener had to cooperate to verify the age of the listener. If the speaker didn’t, and the listener was a minor, then the speaker was guilty of a felony.
Real-space law assigns the burden in exactly the same way. If you want to sell porn in New York, you need both to determine whether the content you’re selling is “harmful to minors” and to determine whether the person you’re selling to is a minor. But real space differs from cyberspace in an important way: the cost of answering question #2. In real space, the answer is almost automatic (again, it’s hard for a kid to hide that he’s a kid). And where the answer is not automatic, there’s a cheap system of identification (a driver’s license, for example). But in cyberspace, any mandatory system of identification constitutes a burden both for the speaker and the listener. Even under COPA, a speaker has to bear the burden of a credit card system, and the listener has to trust a pornographer with his credit card just to get access to constitutionally protected speech.
36. See Blake T. Bilstad, "Obscenity and Indecency in a Digital Age: The Legal and Political Implications of Cybersmut, Virtual Pornography, and the Communications Decency Act of 1996,"
37. Marty Rimm, "Marketing Pornography on the Information Superhighway: A Survey of 917,410 Images, Descriptions, Short Stories, and Animations Downloaded 8.5 Million Times by Consumers in over 2,000 Cities in Forty Countries, Provinces, and Territories,"
38. See Philip Elmer-DeWitt, "On a Screen Near You: Cyberporn — It's Popular, Pervasive, and Surprisingly Perverse, According to the First Survey of Online Erotica — And There's No Easy Way to Stamp It Out,"
40. The law was extinguished (at least in part) at 521 US 844 (1997); see Eugene Volokh, "Freedom of Speech, Shielding Children, and Transcending Balancing,"
41. See
42.