There’s another feature of the CDA/COPA laws that seems necessary but isn’t: They both place the burden of their regulation upon everyone, including those who have a constitutional right to listen. They require, that is, everyone to show an ID when it is only kids who can constitutionally be blocked.
So compare, then, the burdens of the CDA/COPA to a different regulatory scheme: one that placed the burden of question #1 (whether the content is harmful to minors) on the speaker and the burden of question #2 (whether the listener is a minor) on the listener.
One version of this scheme is simple, obviously ineffective, and unfair to the speaker: a requirement that a website block access with a page that says “The content on this page is harmful to minors. Click here if you are a minor.” This scheme places the burden of age identification on the kid. But obviously, it would have zero effect in actually blocking a kid. And, less obviously, this scheme would be unfair to speakers. A speaker may well have content that constitutes material “harmful to minors,” but not everyone who offers such material should be labeled a pornographer. This transparent block is stigmatizing to some, and if a less burdensome system were possible, that stigma should render a regulation mandating it unconstitutional as well.
So what’s an alternative for this scheme that might actually work?
I’m going to demonstrate such a system with a particular example. Once you see the example, the general point will be easier to see as well.
Everyone knows the Apple Macintosh. It, like every modern operating system, now allows users to specify “accounts” on a particular machine. I’ve set one up for my son, Willem (he’s only three, but I want to be prepared). When I set up Willem’s account, I set it up with “parental controls.” That means I get to specify precisely what programs he gets to use, and what access he has to the Internet. The “parental controls” make it (effectively) impossible to change these specifications. You need the administrator’s password to do that, and if that’s kept secret, then the universe the kid gets to through the computer is the universe defined by the access the parent selects.
Imagine one of the programs I could select was a browser with a function we could call “kids-mode-browsing” (KMB). That browser would be programmed to watch on any web page for a particular mark. Let’s call that mark the “harmful to minors” mark, or <H2M> for short. That mark, or in the language of the Web, tag, would bracket any content the speaker believes is harmful to minors, and the KMB browser would then not display any content bracketed with this <H2M> tag. So, for example, a web page marked up “Blah blah blah <H2M>block this</H2M> blah blah blah” would appear on a KMB screen as: “Blah blah blah blah blah blah.”
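To make the tag-stripping concrete, here is a minimal sketch (in Python, for illustration) of the filtering step a KMB browser might perform. The function name is mine, and a real browser would do this inside its HTML parser rather than with a regular expression, but the logic is the same: drop everything bracketed by <H2M> tags before rendering.

    import re

    # Match an <H2M>...</H2M> bracket, including everything between the tags.
    # DOTALL lets a bracket span multiple lines; IGNORECASE accepts <h2m> too.
    H2M = re.compile(r"<H2M>.*?</H2M>", re.DOTALL | re.IGNORECASE)

    def kmb_filter(page: str) -> str:
        """Return the page with every <H2M>-bracketed span removed."""
        return H2M.sub("", page)

    print(kmb_filter("Blah blah blah <H2M>block this</H2M> blah blah blah"))
    # prints "Blah blah blah  blah blah blah" (the bracketed span is gone)

An ordinary browser, knowing nothing of the tag, would simply ignore it and render everything in between; only a browser running in kids-mode would apply this filter.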
So, if the world of the World Wide Web were marked with <H2M> tags, and if browser manufacturers built this <H2M>-filtering function into their browsers, then parents would be able to configure their machines so their kids didn’t get access to any content marked <H2M>. The policy objective of enabling parental control would be achieved with a minimal burden on constitutionally entitled speakers.
How can we get (much of) the world of the Web to mark its “harmful to minors” content with <H2M> tags?
This is the role for government. Unlike the CDA or COPA, the regulation required to make this system work — to the extent it works, and more on that below — is simply that speakers mark their content. Speakers would not be required to block access; speakers would not be required to verify age. All the speaker would be required to do is to tag content deemed harmful to minors with the proper tag.
This tag, moreover, would not be a public marking that a website was a porn site. This proposal is not like the (idiotic, imho) proposals that we create a .sex or .xxx domain for the Internet. People shouldn’t have to locate to a red-light district just to have adult material on their site. The <H2M> tag instead would be hidden from the ordinary user — unless that user looks for it, or wants to block that content him or herself.
Once the government enacts this law, then browser manufacturers would have an incentive to build this (very simple) filtering technology into their browsers. Indeed, given the open-source Mozilla browser technology — to which anyone could add anything they wanted — the costs of building this modified browser are extremely low. And once the government enacts this law, and browser manufacturers build a browser that recognizes this tag, then parents would have a strong reason to adopt platforms that enable them to control where their kids go on the Internet.
Thus, in this solution, the LAW creates an incentive (through penalties for noncompliance) for sites with “harmful to minors” material to change their ARCHITECTURE (by adding <H2M> tags), which in turn creates a MARKET for browser manufacturers to add filtering to their code, so that parents can protect their kids. The only burden created by this solution falls on the speaker; it does not burden the rightful consumer of porn at all. To that consumer, there is no change in the way the Web is experienced, because without a browser that looks for the <H2M> tag, the tag is invisible to the consumer.
But isn’t that burden on the speaker unconstitutional? It’s hard to see why it would be, if it is constitutional in real space to tell a speaker he must filter kids from his content “harmful to minors.” No doubt there’s a burden. But the question isn’t whether there’s a burden. The constitutional question is whether there is a less burdensome way to achieve this important state interest.
But what about foreign sites? Americans can’t regulate what happens in Russia. Actually, that’s less true than you think. As we’ll see in the next chapter, there’s much that the U.S. government can do and does to effectively control what other countries do.
Still, you might worry that sites in other countries won’t obey American law because it’s not likely we’ll send in the Marines to take out a noncomplying website. That’s certainly true. But to the extent that a parent is concerned about this, as I already described, there is a market already to enable geographic filtering of content. The same browser that filters on <H2M> could in principle subscribe to an IP mapping service to enable access to American sites only.
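As a sketch of how that might look (the lookup table below is a hypothetical stand-in for such a commercial IP-mapping service, and the names are mine), the browser could resolve each site’s address and allow it only if the service maps that address to the United States:

    import socket

    # Hypothetical stand-in for a subscription IP-mapping service; a real
    # deployment would query the service (or a local geolocation database).
    IP_TO_COUNTRY = {"93.184.216.34": "US"}  # example entry only

    def allow_site(hostname: str, allowed=frozenset({"US"})) -> bool:
        """Allow a site only if its address maps to an allowed country.

        Lookup failures block the site, the conservative default for a
        browser running in kids-mode.
        """
        try:
            ip = socket.gethostbyname(hostname)
        except OSError:
            return False
        return IP_TO_COUNTRY.get(ip) in allowed

The same conservative default matters here as with the <H2M> filter: when the browser cannot tell where a site is, the kids-mode configuration blocks it rather than guessing.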
But won’t kids get around this restriction? Sure, of course some will. But the measure of success for legislation (as opposed to missile tracking software) is not 100 percent. The question the legislature asks is whether the law will make things better.[45] Substantially blocking access to <H2M> content would be a significant improvement, and that would be enough to make the law make sense.
But why not simply rely upon filters that parents and libraries install on their computers? Voluntary filters don’t require any new laws, and they therefore don’t require any state-sponsored censorship to achieve their ends.
45. There is also a doctrine within First Amendment law that might limit the ability of the government to regulate when the regulation is ineffective. See