It is this view that I want to work hardest to dislodge, because built within it are all the mistakes that a pre-cyberlaw understanding brings to the question of regulation in cyberspace.
First, consider the word “censorship.” What this regulation would do is give parents the opportunity to exercise an important choice. Enabling parents to do this has been deemed a compelling state interest. The kids who can’t get access to this content because their parents exercised this choice might call it “censorship”, but that isn’t a very useful application of the term. If there is a legitimate reason to block this form of access, that’s speech regulation. There’s no reason to call it names.
Second, consider the preference for “voluntary filters.” If voluntary filters were to achieve the very same end (blocking H2M speech and only H2M speech), I’d be all for them. But they don’t. As the ACLU quite powerfully described (shortly after winning the case that struck down the CDA partly on the grounds that private filters were a less restrictive means than government regulation):
The ashes of the CDA were barely smoldering when the White House called a summit meeting to encourage Internet users to self-rate their speech and to urge industry leaders to develop and deploy the tools for blocking “inappropriate speech.” The meeting was “voluntary”, of course: the White House claimed it wasn’t holding anyone’s feet to the fire. But the ACLU and others . . . were genuinely alarmed by the tenor of the White House summit and the unabashed enthusiasm for technological fixes that will make it easier to block or render invisible controversial speech. . . . It was not any one proposal or announcement that caused our alarm; rather, it was the failure to examine the longer-term implications for the Internet of rating and blocking schemes[46].
The ACLU’s concern is the obvious one: The filters that the market has created not only filter much more broadly than the legitimate interest the state has here — blocking <H2M> speech — they also do so in a totally nontransparent way. There have been many horror stories of sites being included in filters for all the wrong reasons (including for simply criticizing the filter)[47]. And when you are wrongfully blocked by a filter, there’s not much you can do. The filter is just a particularly effective recommendation list. You can’t sue Zagat’s just because they steer customers to your competitors.
My point is not that we should ban filters, or that parents shouldn’t be allowed to block more than H2M speech. My point is that if we rely upon private action alone, more speech will be blocked than if the government acted wisely and efficiently.
And that frames my final criticism: As I’ve argued from the start, our focus should be on the liberty to speak, not just on the government’s role in restricting speech. Thus, between two “solutions” to a particular speech problem, one that involves the government and suppresses speech narrowly, and one that doesn’t involve the government but suppresses speech broadly, constitutional values should tilt us to favor the former. First Amendment values (even if not the First Amendment directly) should lead to favoring a speech regulation system that is thin and accountable, and in which the government’s action or inaction leads only to the suppression of speech the government has a legitimate interest in suppressing. Or, put differently, the fact that the government is involved should not necessarily disqualify a solution as a proper, rights-protective solution.
The private filters the market has produced so far are both expensive and over-inclusive. They block content that is beyond the state's interest in regulating speech. And they are effectively subsidized, because no narrower, less restrictive alternative exists to compete with them.
Publicly required filters (which are what the <H2M> tag effectively enables) are narrowly targeted on the legitimate state interest. And if there is a dispute about that tag — if for example, a prosecutor says a website with information about breast cancer must tag the information with an <H2M> tag — then the website at least has the opportunity to fight that. If that filtering were in private software, there would be no opportunity to fight it through legal means. All that free speech activists could then do is write powerful, but largely invisible, articles like the ACLU’s famous plea.
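To make the mechanics concrete, here is a minimal sketch, in Python, of how a "kids-mode" browser or plug-in might honor such a tag. The <H2M> tag itself is the hypothetical label discussed above; the function names and blocking behavior are illustrative assumptions, not a description of any existing product.

    # A minimal sketch, assuming a hypothetical <H2M> tag embedded in a page's
    # HTML. A "kids-mode" client fetches the page and blocks it only if the
    # tag is present. Names here are illustrative, not any real product's API.
    import re
    import urllib.request

    H2M_PATTERN = re.compile(r"<\s*h2m\b", re.IGNORECASE)

    def fetch_page(url: str) -> str:
        # Download the page as text (error handling omitted for brevity).
        with urllib.request.urlopen(url) as resp:
            return resp.read().decode("utf-8", errors="replace")

    def allowed_for_minor(html: str) -> bool:
        # Block if, and only if, the page carries the <H2M> tag.
        return H2M_PATTERN.search(html) is None

    if __name__ == "__main__":
        page = fetch_page("http://example.com/")
        print("show page" if allowed_for_minor(page) else "blocked: page is tagged <H2M>")

The point of the sketch is only that the check is simple and transparent: a page is blocked if and only if it carries the tag, so a site owner knows exactly why it was blocked and can contest the tagging.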
It has taken key civil rights organizations too long to recognize this private threat to free-speech values. The tradition of civil rights is focused directly on government action alone. I would be the last to say that there's not great danger from government misbehavior. But there is also danger to free speech from private misbehavior. An obsessive refusal even to weigh the one threat against the other does not serve the values promoted by the First Amendment.
But then what about public filtering technologies, like PICS? Wouldn’t PICS be a solution that avoided the “secret list problem” you identified?
PICS is an acronym for the World Wide Web Consortium's Platform for Internet Content Selection. We have already seen a relative (actually, a child) of PICS in the chapter about privacy: P3P. Like PICS, P3P is a protocol for rating and filtering content on the Net. In the context of privacy, the content was made up of assertions about privacy practices, and the regime was designed to help individuals negotiate those practices.
With online speech the idea is much the same. PICS divides the problem of filtering into two parts — labeling (rating content) and then filtering (blocking content on the basis of the rating). The idea was that software authors would compete to write software that could filter according to the ratings; content providers and rating organizations would compete to rate content. Users would then pick their filtering software and rating system. If you wanted the ratings of the Christian Right, for example, you could select its rating system; if I wanted the ratings of the Atheist Left, I could select that. By picking our raters, we would pick the content we wanted the software to filter.
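As a rough illustration of that division of labor, the following Python sketch separates a label (what a rating bureau might publish for a page) from a filter (what the user's chosen software does with it). The rating service URL, the category names, and the numeric thresholds are all invented for this example; real PICS labels use the W3C's own label syntax rather than Python dictionaries.

    # A rough sketch of the PICS split between labeling and filtering.
    # The rating service URL, category names, and scores are invented;
    # real PICS labels use the W3C label syntax, not Python dictionaries.
    from typing import Dict

    # What a rating bureau might publish for one page.
    label = {
        "service": "http://ratings.example.org/v1",
        "url": "http://example.com/page.html",
        "ratings": {"violence": 2, "nudity": 0, "language": 1},
    }

    # The rules the user (or parent) chose: maximum acceptable score per category.
    chosen_rules: Dict[str, int] = {"violence": 1, "nudity": 0, "language": 3}

    def permitted(page_label: dict, rules: Dict[str, int]) -> bool:
        # Block the page if any rated category exceeds the chosen threshold.
        return all(score <= rules.get(category, 0)
                   for category, score in page_label["ratings"].items())

    print(permitted(label, chosen_rules))  # False: violence=2 exceeds the limit of 1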
This regime requires a few assumptions. First, software manufacturers would have to write the code necessary to filter the material. (This has already been done in some major browsers.) Second, rating organizations would actively have to rate the Net. This, of course, would be no simple task; no organization has yet risen to the challenge of rating the billions of pages on the web. Third, organizations that rated the Net in a way that allowed for a simple translation from one rating system to another would have a competitive advantage over other raters. They could, for example, sell a rating system to the government of Taiwan and then easily develop a slightly different rating system for the "government" of IBM.
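The "simple translation" point in the third assumption can be shown in a few lines: if one rater's categories map cleanly onto another vocabulary, the same underlying labels can be repackaged for a different customer. Both vocabularies below are invented for the example.

    # A sketch of the "simple translation" advantage: the same underlying
    # labels re-expressed in a customer's vocabulary. Both vocabularies are
    # invented for this example.
    base_label = {"violence": 2, "nudity": 0, "language": 1}

    # Hypothetical mapping from the rater's categories to the customer's scheme.
    translation = {
        "violence": "graphic_content",
        "nudity": "adult_imagery",
        "language": "profanity",
    }

    customer_label = {translation[key]: score for key, score in base_label.items()}
    print(customer_label)  # {'graphic_content': 2, 'adult_imagery': 0, 'profanity': 1}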
If all three assumptions held true, any number of ratings could be applied to the Net. As envisioned by its authors, PICS would be neutral among ratings and neutral among filters; the system would simply provide a language with which content on the Net could be rated, and with which decisions about how to use that rated material could be made from machine to machine[48].
Neutrality sounds like a good thing. It sounds like an idea that policymakers should embrace. Your speech is not my speech; we are both free to speak and listen as we want. We should establish regimes that protect that freedom, and PICS seems to be just such a regime.
But PICS contains more "neutrality" than we might like. PICS is not just horizontally neutral — allowing individuals to choose from a range of rating systems the one they want; PICS is also vertically neutral — allowing the filter to be imposed at any level in the distributional chain. Most people who first endorsed the system imagined the PICS filter sitting on a user's computer, filtering according to the desires of that individual. But nothing in the design of PICS prevents organizations that provide access to the Net from filtering content as well. Filtering can occur at any level in the distributional chain — the user, the company through which the user gains access, the ISP, or even the jurisdiction within which the user lives. Nothing in the design of PICS, that is, requires that such filters announce themselves. Filtering in an architecture like PICS can be invisible. Indeed, in some of its implementations invisibility is part of its design[49].
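To see how easily the same check moves up the distributional chain, consider this hedged Python sketch of a label-based filter sitting at a proxy or access provider rather than on the user's machine. Every name in it is illustrative; the point is only that a page blocked upstream can simply appear not to exist.

    # A sketch of label-based blocking applied upstream, at a proxy or access
    # provider, rather than on the user's machine. Everything here is
    # illustrative; the point is that the block announces nothing.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Pages whose labels failed the upstream rules (hypothetical list).
    BLOCKED_BY_LABEL = {"example.com/page.html"}

    class SilentFilterProxy(BaseHTTPRequestHandler):
        def do_GET(self):
            key = self.headers.get("Host", "") + self.path
            if key in BLOCKED_BY_LABEL:
                # No "blocked by filter" notice: the page simply seems not to exist.
                self.send_error(404, "Not Found")
                return
            # A real proxy would fetch and relay the upstream content here.
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"(relayed content would appear here)\n")

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), SilentFilterProxy).serve_forever()

From the user's side there is nothing to appeal, and often nothing even to notice: the invisibility is a property of where the filter sits, not of any particular blacklist.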
46. Ann Beeson and Chris Hansen, "Fahrenheit 451.2: Is Cyberspace Burning?" (American Civil Liberties Union White Paper, March 17, 2002).
47. Not all of these filters function by using blacklists. Two examples of filtering programs that use an algorithmic approach rather than blacklists are PixAlert's SafeScreen (available at http://www.safescreen.net (cached: http://www.webcitation.org/5J6mzZs25)) and LTU Technologies' ImageSeeker (available at http://www.ltutech.com/en/ (cached: http://www.webcitation.org/5J6n39fZm)), the latter of which is supposedly being used by the FBI and DHS in child pornography investigations.
48. Paul Resnick, "PICS-Interest@w3.org, Moving On," January 20, 1999, available at http://lists.w3.org/Archives/Public/pics-interest/1999Jan/0000.html (cached: http://www.webcitation.org/5J6n63Y6x); Paul Resnick, "Filtering Information on the Internet."
49. See Jonathan Weinberg, "Rating the Net."