2. The GNU Project distributes the GNU Privacy Guard, a program that implements public-key encryption and digital signatures, which you can use to send secure and private email. It is useful to explore how GPG differs from treacherous computing, and see what makes one helpful and the other so dangerous.
When someone uses GPG to send you an encrypted document, and you use GPG to decode it, the result is an unencrypted document that you can read, forward, copy, and even reencrypt to send it securely to someone else. A treacherous-computing application would let you read the words on the screen, but would not let you produce an unencrypted document that you could use in other ways. GPG, a free software package, makes security features available to the users; they use it. Treacherous computing is designed to impose restrictions on the users; it uses them.
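(For readers who want to see that workflow concretely, here is a minimal sketch that drives the standard gpg command-line tool from Python; the file names and the recipient address are illustrative assumptions, not part of the essay.)

    # Minimal sketch: decrypt a message sent to you, then re-encrypt it
    # for someone else. Assumes gpg is installed and the relevant keys
    # are in your keyring; file names and addresses are hypothetical.
    import subprocess

    # Decrypt the message; the result is an ordinary plaintext file
    # that you can read, copy, or forward as you wish.
    subprocess.run(["gpg", "--output", "message.txt",
                    "--decrypt", "message.asc"], check=True)

    # Re-encrypt the plaintext to a third party's public key so you can
    # send it on securely.
    subprocess.run(["gpg", "--armor", "--output", "forward.asc",
                    "--encrypt", "--recipient", "friend@example.org",
                    "message.txt"], check=True)

Nothing in GPG prevents either step; once decrypted, the plaintext is simply a file under your control.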
3. The supporters of treacherous computing focus their discourse on its beneficial uses. What they say is often correct, just not important.
Like most hardware, treacherous-computing hardware can be used for purposes which are not harmful. But these features can be implemented in other ways, without treacherous-computing hardware. The principal difference that treacherous computing makes for users is the nasty consequence: rigging your computer to work against you.
What they say is true, and what I say is true. Put them together and what do you get? Treacherous computing is a plan to take away our freedom, while offering minor benefits to distract us from what we would lose.
4. Microsoft presents Palladium as a security measure, and claims that it will protect against viruses, but this claim is evidently false. A presentation by Microsoft Research in October 2002 stated that one of the specifications of Palladium is that existing operating systems and applications will continue to run; therefore, viruses will continue to be able to do all the things that they can do today.
When Microsoft employees speak of “security” in connection with Palladium, they do not mean what we normally mean by that word: protecting your machine from things you do not want. They mean protecting your copies of data on your machine from access by you in ways others do not want. A slide in the presentation listed several types of secrets Palladium could be used to keep, including “third party secrets” and “user secrets”—but it put “user secrets” in scare quotes, recognizing that this is somewhat of an absurdity in the context of Palladium.
The presentation made frequent use of other terms that we usually associate with security, such as “attack,” “malicious code,” “spoofing,” as well as “trusted.” None of them means what it normally means. “Attack” doesn’t mean someone trying to hurt you; it means you trying to copy music. “Malicious code” means code installed by you to do what someone else doesn’t want your machine to do. “Spoofing” doesn’t mean someone’s fooling you; it means your fooling Palladium. And so on.
5. A previous statement by the Palladium developers stated the basic premise that whoever developed or collected information should have total control of how you use it. This would represent a revolutionary overturn of past ideas of ethics and of the legal system, and create an unprecedented system of control. The specific problems of these systems are no accident; they result from the basic goal. It is the goal we must reject.
Copyright © 2002, 2007 Richard Stallman
This essay was first published on http://gnu.org in 2002. This version is part of Free Software, Free Society: Selected Essays of Richard M. Stallman, 2nd ed. (Boston: GNU Press, 2010).
Verbatim copying and distribution of this entire chapter are permitted worldwide, without royalty, in any medium, provided this notice is preserved.
Chapter 33.
Who Does That Server Really Serve?
Background: How Proprietary Software Takes Away Your Freedom
Digital technology can give you freedom; it can also take your freedom away. The first threat to our control over our computing came from proprietary software: software that the users cannot control because the owner (a company such as Apple or Microsoft) controls it. The owner often takes advantage of this unjust power by inserting malicious features such as spyware, back doors, and Digital Restrictions Management (DRM) (referred to as “Digital Rights Management” in their propaganda).
Our solution to this problem is developing free software and rejecting proprietary software. Free software means that you, as a user, have four essential freedoms: (0) to run the program as you wish, (1) to study and change the source code so it does what you wish, (2) to redistribute exact copies, and (3) to redistribute copies of your modified versions. (See “The Free Software Definition,” on p. 3.)
With free software, we, the users, take back control of our computing. Proprietary software still exists, but we can exclude it from our lives, and many of us have done so. However, we now face a new threat to our control over our computing: Software as a Service. For our freedom’s sake, we have to reject that too.
How Software as a Service Takes Away Your Freedom
Software as a Service (SaaS) means that someone sets up a network server that does certain computing tasks—running spreadsheets, word processing, translating text into another language, etc.—then invites users to do their computing on that server. Users send their data to the server, which does their computing on the data thus provided, then sends the results back or acts on them directly.
These servers wrest control from the users even more inexorably than proprietary software. With proprietary software, users typically get an executable file but not the source code. That makes it hard for programmers to study the code that is running, so it’s hard to determine what the program really does, and hard to change it.
With SaaS, the users do not have even the executable file: it is on the server, where the users can’t see or touch it. Thus it is impossible for them to ascertain what it really does, and impossible to change it.
Furthermore, SaaS automatically leads to harmful consequences equivalent to the malicious features of certain proprietary software. For instance, some proprietary programs are “spyware”: the program sends out data about users’ computing activities. Microsoft Windows sends information about users’ activities to Microsoft. Windows Media Player and RealPlayer report what each user watches or listens to.
Unlike proprietary software, SaaS does not require covert code to obtain the user’s data. Instead, users must send their data to the server in order to use it. This has the same effect as spyware: the server operator gets the data. He gets it with no special effort, by the nature of SaaS.
Some proprietary programs can mistreat users under remote command. For instance, Windows has a back door with which Microsoft can forcibly change any software on the machine. The Amazon Kindle e-book reader (whose name suggests it’s intended to burn people’s books) has an Orwellian back door that Amazon used in 2009 to remotely delete Kindle copies of Orwell’s books 1984 and Animal Farm which the users had purchased from Amazon.[1] SaaS inherently gives the server operator the power to change the software in use, or the users’ data being operated on. Once again, no special code is needed to do this.
1. Brad Stone, “Amazon Erases Orwell Books from Kindle,”